This week at my second ASU+GSV Summit, I had the opportunity to speak with some of the greatest minds and voices in EdTech. Almost ten years ago I earned my PhD in Education, where I studied K-12 innovation adoption. With those two circles, ASU+GSV looks like an interesting Venn diagram.

AI breaks some of the known rules, like no longer requiring top-down institutional adoption driven by hierarchical experts. AI also reinforces others, like the benefits of risk-based experimentation and value-driven use case implementation.
Let’s unpack the bags from this great trip.
Listening and Learning at the 2026 ASU+GSV Summit
At conferences, we get to take a minute outside our day jobs to see what others are doing in our industry and learn from them. AI is a theme at every conference, no matter the industry, so we have a lot of opportunities to learn how people are adopting it. At ASU+GSV this week, I had the chance to listen to experts, from presenters on stage to fellow foodies at my table.
Claire Zau (1) talked about four stages of AI, which was a clean way of looking at how we have adopted AI over the years: Perception, Generative, Agentic, and Physical. With Perception AI, we can think of Siri and the patterns this first mode was able to decipher. Generative AI gave us the ability to create content and build our skills with prompt engineering to get the images and text we wanted. Our move into Agentic AI unlocked an ability to execute actions and automate processes based on a set of rules, testing it first with a human in the loop (HITL) and then releasing it to act on its own. Finally, Physical AI is the implementation of AI in our physical space – think Waymo self-driving cars.
Our risk increases as we progress through these four stages because the role of human judgment and decision-making shrinks with each step. Many of us were quick to adopt Siri, while those same early adopters are still cautious about catching a ride in a Waymo.
Several speakers referred to the jagged frontier or jagged edge of AI, where the benefits of using the technology are uneven. The term comes from a 2023 paper studying knowledge worker productivity: “We introduce and study the concept of a “jagged technology frontier” to describe the uneven impact of artificial intelligence (AI) capabilities, where AI assistance improves performance for some tasks but worsens it for others, even within the same knowledge workflow and with a seemingly similar level of difficulty.” (2) They pointed to this jagged edge as a way of explaining the rate of project failure, the mixed perceptions of AI’s value, and the stutters we see in adoption and use case ROI. When we tackle tasks within the jagged edge, we get great value from being augmented with AI, like writing a job description or summarizing a given text. However, on tasks that are more complex and ambiguous, like many managerial challenges sitting outside the jagged edge, AI assistants slow us down with more friction and even some hallucinations that need intervention.
Our risk in relying on AI to assist us in our work is high where the tasks are complex, so knowing what it does well (within the jagged edge) and where it falls short (outside the jagged edge) is something we learn with practice. This means we are likely to fail more often in early use while we test where the edge is in our work. Given the rate of improvement in AI, the jagged edge is constantly moving, so what didn’t work before may work in the future… but we won’t know when, or for which tasks.
This risk-based approach to evaluating technologies allows us to quickly adopt low-risk solutions, like asking Siri to play a song by Bad Weather (3), while we spend more time experimenting with our higher-stakes use cases, like managerial decision-making in times of uncertainty.
Are you risk-averse or a luddite?
Don’t be a luddite.
Given the risks in this jagged-edge technology, we may hold back from diving into experimentation in favor of waiting for others to show us the way. With AI, waiting on the sidelines will hold you back. We all need to be in the game. If you are risk-averse, experiment on a no-risk task in your area of expertise. Ask AI to help you do something you already do well so you can assess its ability to handle the task and test where the jagged edge is for you. Use it for insights from the data in your logs or spreadsheets. Build an agent to summarize your emails from the day before while you get ready for work. Don’t be a luddite waiting for someone else to do this for you. You know best what you know best, so use AI in that use case today.
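To make the email example concrete, here is a minimal sketch of that morning-briefing agent. It assumes an OpenAI API key in your environment and a fetch_yesterdays_emails() helper (hypothetical – built on whatever your mail provider offers) that returns plain-text messages; treat it as a starting point for your own experiment, not a finished tool.

```python
# Minimal sketch of a "summarize yesterday's email" agent.
# Assumptions: OPENAI_API_KEY is set in the environment, and a
# fetch_yesterdays_emails() helper (hypothetical) returns plain-text messages.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_yesterday(emails: list[str]) -> str:
    """Ask the model for a short morning briefing of yesterday's mail."""
    combined = "\n\n---\n\n".join(emails)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize these emails as a five-bullet morning briefing. "
                    "Flag anything that needs a reply today."
                ),
            },
            {"role": "user", "content": combined},
        ],
    )
    return response.choices[0].message.content


# emails = fetch_yesterdays_emails()   # hypothetical helper
# print(summarize_yesterday(emails))
```

The point is not the tooling; it is that the task sits squarely inside your jagged edge, so you can judge the output against work you already know how to do.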
Engage differently.
In the past, we could tell ourselves not to be nervous on a stage because we knew more than the audience about the topic. After all, we were invited to speak about it! With the democratization of this technology, anyone can be an AI expert. There is no more sage on the stage for any of us based on our credentials. When you take the stage because you have been invited to speak about AI, remember there will be others in the audience who know more than you. When you are in the audience, look left and look right; you are surrounded by experts. Engage differently. Show up differently. Guide each other.
Bring your human intelligence (H+A)I.
So all that work to gain wisdom won’t be worth anything in the AI space? Au contraire! AI implementation needs ethics, trust, and kindness infused at every step. We all need to dig in to ensure the ethical and responsible use of AI. In education, we need to remove bias, train our students on guidelines for use, and encourage representation from every group in the experimentation. To build trust in AI, we should be transparent about our sources of information, keep a human in the loop for validation of those jagged-edge cases, and explain our results. Keep kindness in the process by recognizing the critical moments when people should talk to people. One example is a student who is concerned about graduating on time and seeking guidance on their class schedule. When the chat with your bot changes from transactional (e.g., how many credits do I need?) to emotional (e.g., am I in danger of losing my scholarship if I don’t pass this course?), the bot needs to refer the student to a person who can handle the interaction. People are built and experienced for that interaction; our agents are not.
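As a rough illustration of that hand-off, here is a minimal sketch of a transactional-versus-emotional escalation gate. The signal phrases and canned responses are illustrative assumptions, not a production classifier – a real advising bot would use a tuned model plus institutional review – but the shape of the logic is the point: detect the emotional shift, then route to a person.

```python
# Minimal sketch of an escalation gate for an advising bot.
# The signal phrases and responses below are illustrative assumptions.
EMOTIONAL_SIGNALS = [
    "worried", "scared", "stressed", "afraid",
    "losing my scholarship", "fail", "drop out",
]


def needs_human(message: str) -> bool:
    """Return True when the message shifts from transactional to emotional."""
    text = message.lower()
    return any(signal in text for signal in EMOTIONAL_SIGNALS)


def handle(message: str) -> str:
    if needs_human(message):
        # Critical moment: hand the conversation to a person.
        return "Let me connect you with an advisor who can talk this through with you."
    # Transactional questions stay with the bot.
    return "Here is what I found in your degree audit."  # placeholder bot answer


print(handle("How many credits do I need?"))
print(handle("Am I in danger of losing my scholarship if I don't pass this course?"))
```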
AI in Education – Use Cases that Work
AI enables a system of action.
We can create an agent to make us more productive and help us learn. In education, as in many other fields, we need to preserve human contact for critical moments. The reason we go to a live concert, a conference, or a game is to experience something together and connect with each other. We can use AI to increase our productivity on simple tasks and know the moments when we need to pass off to a person. The systems can take actions while we make connections. Relegating people to HITL action confirmation misses a great opportunity to do the work only our Human Intelligence (HI) can do: meaningful interaction. A great example is AI helping a student with a personalized skill-building activity while the teacher owns the student connection and uses their agent for tactical action.
Agentic reasoning and action.
We can use AI to make learning efficient and to ensure learning is happening. Efficiency comes from the personalization of activities based on my interests and an assessment of my skills. I can bypass the topics where I have demonstrated mastery and the content where I have no interest. For example, if my self-assessment or a standards-based assessment shows that I have mastered a standard, I can move on to the next one and have content presented to me in areas that interest me, like physics or basketball. The efficiency in learning cannot replace the productive struggle of learning, despite AI being very good at alleviating that struggle. AI *can* answer the questions for students, but to ensure learning is happening, the AI *should* focus on content presentation and on measuring the learning in concert with the educator. We can use AI to assist the educator in the assessment of student learning.
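For readers who like to see the mechanics, here is a minimal sketch of that mastery-based pacing rule. The threshold, the standards, and the interest-themed content string are illustrative assumptions; a real implementation would sit on top of an actual standards-based assessment and the educator’s judgment.

```python
# Minimal sketch of mastery-based pacing with interest-themed content.
# The cut score, standards, and content lookup are illustrative assumptions.
MASTERY_THRESHOLD = 0.8  # assumed cut score on a standards-based assessment


def next_activity(scores: dict[str, float], standards: list[str],
                  interest: str) -> str | None:
    """Return the next unmastered standard, themed to the student's interest,
    skipping standards the student has already demonstrated mastery of."""
    for standard in standards:
        if scores.get(standard, 0.0) < MASTERY_THRESHOLD:
            return f"Practice {standard} using {interest}-themed examples."
    return None  # all standards mastered


# Example: skip the mastered standard, present the next one through basketball.
scores = {"ratios": 0.92, "proportions": 0.55}
print(next_activity(scores, ["ratios", "proportions"], "basketball"))
# -> "Practice proportions using basketball-themed examples."
```

Notice what the sketch does not do: it never answers the practice questions for the student. It only decides what to present next, leaving the productive struggle, and the assessment of it, to the learner and the educator.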
Agentic AI and Human Agency.
With the democratization of AI, many of us have the ability to use it in ways that can boost our agency. AI adoption is very different from other technology adoptions we have seen in institutional settings. With cloud computing, computer-based testing, and 3D printing, the school managed access to the technology because it made the purchase decisions, and individuals could not DIY solutions for themselves. AI is available to many, so we can adopt it as individuals, pointing it at a pain point or an opportunity for growth that we value. In one example from a district in Florida, students built their own agents and increased their agency in the process.
Summary
There are fantastic examples of AI being successfully adopted in education. The value we build with AI depends on taking a risk-based approach while experimenting, engaging differently in the process of learning, and keeping the AI focused on actions while we do what we do best.
Notes
NB: You can’t be an edtech CIO today and not wonder about the greater impacts AI will have on the education ecosystem and our physical ecosystem. This article does not tackle the tough issues we face around unintended environmental consequences; they deserve attention, but they are beyond the scope of this piece.
NB: This article is completely HI… typos and errors are all my own.
Citations
- (1) Claire Zau on LinkedIn
- (2) Dell’Acqua, Fabrizio, Edward McFowland III, Ethan Mollick, Hila Lifshitz-Assaf, Katherine C. Kellogg, Saran Rajendran, Lisa Krayer, François Candelon, and Karim R. Lakhani. “Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of Artificial Intelligence on Knowledge Worker Productivity and Quality.” Harvard Business School Working Paper, No. 24-013, September 2023. (Forthcoming in Organization Science.)
- (3) Bad Weather on Spotify