Through Gates Foundation Grant, RISE Explores Uses of AI

Data Tools and Practices

Just about a year ago, educators and our entire society were rocked by the release of ChatGPT, the artificial intelligence chatbot developed by the company OpenAI. Since then, our field has been grappling with the implications of this new technology and how widely available it has become. In the field of education, we’ve seen reactions that range from technological evangelists (“This will finally allow us to ensure every student gets what they need and close all of our equity gaps!”) to pessimists (“No student will ever do their own work again, and teachers are going to be replaced by robots!”) to those who have been jaded by technological advances in the past (“It’s doubtful that much will change either way.”). At RISE, we reacted to the innovations in artificial intelligence (AI) the way we react to anything else: by asking ourselves how AI will impact our mission, which is to ensure that students graduate with the skills and confidence they need to achieve postsecondary success by leading school networks where educators use data to learn and improve.

RISE is well-positioned to consider the implications of AI on the work of schools and districts. AI runs on data, and RISE has a fantastic data team staffed with engineers, developers, analysts, and researchers who work to ensure that data is strategically organized, analyzed, and surfaced in ways that make it actionable for educators to support their students. At the heart of everything we do is a fundamental commitment to keep student data safe and private by building secure tools with protections that exceed industry standards. We have a history of leveraging our team’s expertise to turn raw data into useful products: our report on remote learning was one of the earliest indicators of how students were faring when schools closed early in the pandemic, and our Data Hub has been lauded by educators who tell us it has become an essential tool they use to do their job.

This history put RISE in a great position to receive a grant from the Bill & Melinda Gates Foundation earlier this year to explore how AI could be used to support educators in achieving strong and equitable student outcomes. 

As a learning organization, we have started by learning as much as we can about this new technology. We have read research, policy, and opinion pieces about AI and have spoken with a diverse range of experts and practitioners, including leading AI authors, researchers from renowned universities, education technology professionals, and administrators from public college systems. Our learning has been driven by this set of questions:

  • What is AI, and how is it currently being used in the field of education?
  • What does ethical AI look like, and how do we center safety and privacy around student data?
  • What would need to happen for us to be able to implement an AI solution?
  • What are some possible use cases of AI in the context of RISE?

As we make progress in our work, we are eager to share what we’re learning and how we’re thinking. It is important to us to live our value of transparency with our community and networks and contribute to the rich conversation happening in the field. Following are some of our initial takeaways from our research and exploration into the complex subject of AI in education.

What is AI, and how is it currently being used in the field of education?

Artificial Intelligence is a very broad term that applies to any technology that emulates some aspect of human intelligence. While attention to the topic has exploded over the past year, AI in general is not new – we talked to people who have been working on AI applications to education for 20 years! The most talked-about form of AI, which has also experienced the most rapid recent technological advances, is Generative AI (GenAI). Popularized in everything from ChatGPT to the customer service chatbots on many commercial websites, GenAI is distinctive because it learns patterns from huge amounts of data (a Large Language Model, or LLM, learns them from huge bodies of text) and then uses those patterns to create original content, such as sequences of words that read like human language. Other forms of AI have been around for much longer. Machine Learning (ML), for example, is an approach that trains an algorithm on large amounts of data and then uses it to predict a particular outcome. You’ve likely interacted with machine learning if you’ve ever been fed recommendations for what to watch on a streaming service or what to buy from an online retailer.
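To make the machine-learning pattern above concrete, here is a minimal, purely illustrative sketch of the streaming-recommendation idea: learn from many users' viewing histories, then suggest titles to a new viewer based on the most similar existing users. All of the user names, titles, and data are hypothetical; a real system would learn from vastly more data and a far more sophisticated model.

```python
from collections import Counter

# Each existing user's watch history (the "large amounts of data").
# All names and titles here are hypothetical illustrations.
histories = {
    "user_a": {"drama_1", "drama_2", "comedy_1"},
    "user_b": {"drama_1", "drama_2", "drama_3"},
    "user_c": {"comedy_1", "comedy_2", "comedy_3"},
}

def recommend(watched, k=2, n=2):
    """Suggest up to n unseen titles based on the k most similar users."""
    # Rank existing users by how much their history overlaps the new viewer's.
    similar = sorted(
        histories.items(),
        key=lambda item: len(item[1] & watched),
        reverse=True,
    )
    # Count titles the similar users watched that the new viewer hasn't.
    counts = Counter(
        title
        for _, history in similar[:k]
        for title in history - watched
    )
    return [title for title, _ in counts.most_common(n)]

print(recommend({"drama_1", "comedy_1"}))  # → ['drama_2', 'drama_3']
```

The toy model "learns" only from overlap counts, but the shape is the same as production recommenders: patterns extracted from historical data drive a prediction about what a new user will want.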

Over the past year, AI applications to education have been exploding. Some of the most common applications include AI tutors that can interact helpfully with students and AI teaching assistants that can help teachers generate first-draft lesson plans and other materials. Interestingly, we have not seen many applications of AI to support educators in making better use of student data.

What does ethical AI look like, and how do we center safety and privacy around student data?

There are many ways that applications of AI can be unethical, so it is essential to be aware of these risks and build in safety from the very beginning. Ensuring ethical uses of AI starts with a mindset: just because we can do something doesn’t mean that we should do it. Before deciding what, if anything, we should do with AI, we must a) guarantee the security and privacy of student data, b) be transparent about how we’re using AI and why, and c) ensure that the use of AI will disrupt, not reinforce, biases and inequities and hold ourselves accountable to seeing that play out. For example, one education administration expert spoke about using AI to better predict which students would struggle in a higher education setting. She saw the risk that this information could be used to exclude students, so she built the system such that the information was only available after a student was admitted to the institution to ensure that the AI prediction could only be used to support and include disadvantaged students, not exclude them.

What would need to happen for us to be able to implement an AI solution?

At the outset of this project, implementing AI sounded big and scary. Wouldn’t we need big supercomputers and multiple PhDs to apply AI? In a word: no. We talked with multiple national experts who have overseen AI applications that run on the computing power of a simple laptop, with models designed and maintained by a couple of junior staff members. Given recent advances in readily available technology, the technical burden of building an AI application has decreased significantly. For example, one education technology expert shared that his tool’s use of AI amounts to restructuring educator inputs so that they can be fed into ChatGPT and return useful results. This practice, known as prompt engineering, is complex, but it doesn’t require advanced degrees or sophisticated technology. What it does require, in a sentiment that was echoed by almost every expert, is a learning mindset and a laser focus on what you are trying to achieve.
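The prompt-engineering idea described above can be sketched in a few lines: take structured educator inputs and restructure them into a prompt that a large language model like ChatGPT could answer usefully. The template and field names here are hypothetical illustrations, not the expert's actual tool.

```python
# A hypothetical prompt template; a real tool would iterate heavily on
# this wording and add guardrails around the model's output.
TEMPLATE = (
    "You are an assistant helping a {role}.\n"
    "Grade level: {grade}\n"
    "Subject: {subject}\n"
    "Task: {task}\n"
    "Respond with a concise, classroom-ready answer."
)

def build_prompt(role, grade, subject, task):
    """Restructure raw educator inputs into a model-ready prompt."""
    return TEMPLATE.format(role=role, grade=grade, subject=subject, task=task)

prompt = build_prompt(
    role="ninth-grade teacher",
    grade="9",
    subject="algebra",
    task="Draft a warm-up problem on linear equations.",
)
print(prompt)
```

The string `prompt` would then be sent to the model's API; the "engineering" lies in designing and testing the template so that whatever the educator types in produces reliably useful output.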

What are some possible use cases of AI in the context of RISE?

It is too early for us to say what we should implement at RISE, but one of the objectives of our exploration is to broaden our understanding of what we could implement. Here are a few use cases that illustrate the power that AI could have to help us achieve our mission:

  • Educators love using the RISE Data Hub, but sometimes it takes a bit of clicking to find the data visualization that would be most helpful. Imagine an AI-powered chat feature that could allow an educator to type in a natural language prompt, something like, “Show me the racial disparities in our ninth-grade math class outcomes,” or “Find 10 students whose grades recently dipped below passing,” and the Hub would be able to produce a graph, list, or other visualization that met the need.
  • Part of RISE’s work involves breaking down barriers across districts and building learning communities. Imagine a team at one school deciding to focus on a problem, like grades dropping in the second quarter of freshman year, and using an AI-powered algorithm to connect them with other teams in other districts who are either working on the same problem or have recently experienced success improving that problem. That connection could blossom into a fruitful cross-school collaboration.
  • RISE currently identifies the “risk or opportunity status” of graduating eighth graders to help ninth-grade teams prepare to give extra support to the students who might need it most. Currently, this status is based on a simple calculation of the eighth grader’s grades and attendance, but imagine an AI-powered algorithm that could learn which detailed factors are most predictive of ninth-grade struggles and use those predictions to target support to students as early as possible.

RISE is just at the beginning of our exploration of AI. Our next phase will focus on hearing directly from educators about the problems they are facing that AI could help solve, and the guardrails they would need to see to ensure that any AI application is ethical and safe. As we continue to hear from our community, we are optimistic that RISE is well-positioned to help realize some of the promise of AI in education while ensuring that this work continues to move us towards excellence, equity, and fairness for every student.