How AI can benefit education and educators

QS EduData Summit 2022

Among the keynote speakers at this year’s QS EduData Summit – held at the United Nations Delegates Dining Room in New York and online – was Intel’s AI Ethics Lead Architect Ria Cheruvu.

Ria, who joined Intel aged 14 and graduated from Harvard University with a master’s degree in data science aged 16, leads a team responsible for the development of trustworthy AI technologies.

She spoke at the QS EduData Summit on the theme of Education and the Pursuit of Curiosity, alongside expert speakers from QS, Google, UNESCO and MIT. We asked for her perspective on changing attitudes towards AI and the opportunities for AI in higher education:

Do you think the pandemic has changed people’s attitudes towards technology, AI and data management? 

Absolutely! The pandemic has certainly revealed many benefits and problems associated with technologies and the societies that power them.

On one hand, we saw exploitation of the pandemic through an increase in scams. We also saw a rise in the use of AI and statistical models to predict COVID-19 cases, where many of these models produced unreliable predictions and were not usable in real-world clinical settings. Tech fatigue from prolonged exposure to technology has also been a major concern.

On the other hand, the pandemic has forcefully alerted us to our close dependence on technology infrastructures and to the need to enforce sensitive and secure data measures, and it has pushed us to develop the critical thinking skills and awareness needed to tackle disinformation and scams. We’ve also been able to identify new positive use cases for AI systems, such as social distancing and temperature checking, and AI-enabled teleconferencing with accessibility features such as live captioning.

In 50 years’ time, which application of AI do you think will impact our everyday lives most?

My prediction is that 50 years from now, human-centered AI (HCAI) systems will be incredibly prevalent in our everyday lives. These systems would be able to offer support and personalization at the individual, group, and societal levels, carrying our society’s core values into the way we interact with each other and with technology. Promising examples of HCAI today include robots that care for the elderly and play with children, and smart voice assistants that help us organise tasks and work efficiently.

I think there will be many more exciting applications for HCAI in education, improving the way we learn, communicate, and innovate!

When talking to people who are concerned about ‘the machines taking over’, how do you reassure them that this isn’t the case?

AI systems are increasingly being woven into digital infrastructures with a large footprint on society. I see this proliferation as the core premise behind widespread concerns about ‘the machines taking over’.

There are many alarming problems with the use of AI systems today that rightfully fuel these concerns, including manipulative power dynamics among human stakeholders, AI’s impact on the workplace, the ease with which AI can be tricked, and widespread bias in the data used to train AI systems. Today, we see the emergence of dedicated interdisciplinary teams working to identify and solve these problems from multiple angles: ethics, psychology, computer science, law, and more.

These concerns have also given rise to much grander claims about our future, some of them pivotal and others exaggerated. I believe that a dystopian picture of AI systems “wanting” to control humans is not in our future. Fears of the rise of ‘AI overlords’ are often conflated with the real, immediate issue: the human stakeholders creating and pushing for AI technologies must be trustworthy and encourage healthy value alignment of AI systems.

I am optimistic that we will be able to chart a safe and responsible path ahead for AI systems so that, as a society, we are able to reap the benefits of the proliferation of AI into our everyday lives.

Is there an application of AI that you think could have a particularly transformative effect on teaching and education?

In my opinion, as with many AI use cases, there are fine lines to tread when introducing AI models into teaching and education. The key question to answer here is: how can AI systems best enable teachers, students, guardians, schools, and organisations?

AI models may offer the greatest benefits when it comes to personalisation of learning journeys. In my keynote, I describe a few learning practices, inspired by my own journey, where I believe AI can help bring positive transformation. One example is an AI system that could help identify and match students’ unique interests and talents to the skillsets they need to acquire. I think this is particularly true when students complete open-ended projects, where AI could enable creative exploration and help students succeed, beyond just being used for information retrieval.
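Purely as an illustration of what such interest-to-skillset matching might look like, here is a toy sketch with hypothetical topic axes and made-up profiles, matching by cosine similarity – a real system would learn these representations from data rather than hand-code them:

```python
import numpy as np

# Hypothetical topic axes shared by student profiles and skillsets.
topics = ["math", "art", "biology", "programming"]
student_interests = np.array([0.9, 0.1, 0.2, 0.8])   # one student's (made-up) profile

skillsets = {
    "data science":   np.array([0.8, 0.0, 0.1, 0.9]),
    "graphic design": np.array([0.1, 0.9, 0.0, 0.3]),
    "bioinformatics": np.array([0.5, 0.0, 0.9, 0.6]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Rank skillsets by how closely they align with the student's interests.
ranked = sorted(skillsets,
                key=lambda s: cosine(student_interests, skillsets[s]),
                reverse=True)
print("suggested skillsets, best match first:", ranked)
```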

With machine learning models which are based on historical data, is there a danger of repeating past assumptions or even biases – and, if so, how do you go about correcting for this?

The short answer is: yes! In some cases, repeating past assumptions that are accurate is a good thing: it shows the AI model is consistent in its predictions (i.e., its representation or assumptions), even when the input data changes.

However, in the case of incorrect assumptions and, in particular, biases, it is critical to apply mitigations such as re-training the AI system.

Oftentimes, the first step towards correction is to categorise the assumption and its risk level, followed by identifying the root cause. But how do we detect that a mistake has even taken place? It is possible to define thresholds so that changes in the model’s behavior or in the input data are flagged and stakeholders can get involved to apply corrections.
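As a minimal sketch of that threshold idea – here using a two-sample Kolmogorov–Smirnov test for input drift and a simple accuracy-drop check for behaviour drift; the threshold values and the `check_for_drift` helper are illustrative, not a production recipe:

```python
import numpy as np
from scipy.stats import ks_2samp  # two-sample Kolmogorov-Smirnov test

# Illustrative thresholds -- real values would be set per use case.
DATA_DRIFT_PVALUE = 0.01      # flag input drift below this p-value
ACCURACY_DROP_LIMIT = 0.05    # flag a 5-point drop in accuracy

def check_for_drift(reference_feature, live_feature,
                    baseline_accuracy, live_accuracy):
    """Flag changes in the input data or in the model's behaviour."""
    alerts = []

    # Input-data drift: has the live feature distribution shifted
    # away from the distribution the model was trained on?
    _, p_value = ks_2samp(reference_feature, live_feature)
    if p_value < DATA_DRIFT_PVALUE:
        alerts.append(f"input drift detected (KS p-value={p_value:.4f})")

    # Behaviour drift: has measured accuracy dropped past the threshold?
    if baseline_accuracy - live_accuracy > ACCURACY_DROP_LIMIT:
        alerts.append(f"accuracy dropped from {baseline_accuracy:.2f} "
                      f"to {live_accuracy:.2f}")

    return alerts  # a non-empty list means stakeholders get involved

# Example: a shifted live distribution trips the input-drift alarm.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5_000)   # data the model was trained on
live = rng.normal(0.5, 1.0, 5_000)    # incoming data, mean has shifted
print(check_for_drift(train, live, baseline_accuracy=0.91, live_accuracy=0.90))
```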

Here, bias becomes a tricky problem, as quantitative metrics are often not enough to identify the risk of bias; we also need to look at the downstream impact of the AI model on end users and other stakeholders. The task of identifying bias and similar types of assumptions requires detailed risk management practices, so that corrections can be applied at the level of both the technology and the organisation.
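To make that point concrete: one common quantitative check is the gap in positive-outcome rates between groups (the demographic parity difference). A toy sketch on made-up decisions, showing that the metric yields a number but not, by itself, a judgement:

```python
import numpy as np

# Made-up toy data: model decisions (1 = approved) for two groups.
# In practice these would come from a real evaluation set.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group     = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

def demographic_parity_difference(decisions, group):
    """Difference in positive-outcome rates between the two groups."""
    rate_a = decisions[group == "a"].mean()
    rate_b = decisions[group == "b"].mean()
    return rate_a - rate_b

gap = demographic_parity_difference(decisions, group)
print(f"positive-rate gap between groups: {gap:+.2f}")
# A gap of +0.40 flags a disparity, but the number alone cannot say
# whether harm is occurring -- that judgement needs the downstream
# context described above.
```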

If you were to design a master’s programme to teach students the skills and aptitudes you need for your current role at Intel, what would that look like? 

I strongly believe an interdisciplinary approach should be applied when designing higher education programmes in the AI space.

The core, requisite technical skills include a strong background in computer science and foundational knowledge of statistics and data science. To best prepare AI practitioners, the curriculum should include courses on data processing and management, model training for different types of AI algorithms (moving away from an exclusive focus on deep learning-based methods), and best practices for optimising and accelerating AI models using cloud services and hardware platforms.
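As one small example of what a non-deep-learning training exercise in such a course might look like – a sketch using scikit-learn’s built-in Iris dataset; the choice of model and dataset here is illustrative:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load a small, classic dataset and hold out a test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Train a classical (non-deep-learning) model.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```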

Offering courses on ethics and philosophy equips students with approaches for breaking down key societal values, such as morality and meaning, and for identifying their own biases through logical deduction and other techniques.

Additionally, there are three key soft skills that I think are critical for students to succeed in AI: communication, leadership, and time management. The pandemic has led to an increased emphasis on AI talent that demonstrates these skills, partly due to the adoption of remote workplaces, which has highlighted the need for communication skills that bridge the virtual gap. We also now see greater recognition of the importance of identifying and presenting quality data correctly and of ensuring that technologies are trustworthy for their users.

Visualisation and presentation are an early element of data science curricula – introductory courses often start by familiarising students with visualising the features of their dataset and the performance of AI models. In my opinion, a master’s curriculum that shows students how to translate these skills into an aptitude for business communication is critical.

Finally, I think self-care, under the time management umbrella, deserves a mention! It’s easy to get overwhelmed by the multitude of research and developments happening in AI every day – I think the pandemic has taught us that it is important to take time to pause, self-introspect, and consider the roles we play for ourselves and the planet.
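As a concrete illustration of the introductory visualisation exercise described above – a minimal sketch with assumed toy data, plotting one feature’s distribution alongside a model’s accuracy curve:

```python
import matplotlib.pyplot as plt
import numpy as np

# Assumed toy data standing in for a real dataset and training run.
rng = np.random.default_rng(1)
feature = rng.normal(50, 10, 300)                          # one dataset feature
accuracy_per_epoch = 1 - 0.5 * np.exp(-0.3 * np.arange(1, 21))

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3))
ax1.hist(feature, bins=20)                                 # feature distribution
ax1.set(title="Feature distribution", xlabel="value", ylabel="count")
ax2.plot(range(1, 21), accuracy_per_epoch)                 # performance over training
ax2.set(title="Model accuracy", xlabel="epoch", ylabel="accuracy")
plt.tight_layout()
plt.show()
```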

Ria Cheruvu spoke at the QS EduData Summit at the United Nations Delegates Dining Room in New York from 8-10 June 2022. We also recently spoke to fellow QS EduData panelists and speakers Dr Paul Thurman, Professor of Management and Analytics at Columbia University, and Nick Creagh, QS Chief Data and Analytics Officer. Registrations are still open to access on-demand content from the QS EduData Summit 2022.

Find out more about QS EduData 2022
