5 GenAI principles for K-12 education

Imagine this: A new technology has arrived, drawing enormous public discussion. Among the questions is how it might be used in schools. Advocates call for its widespread adoption, saying it will revolutionize schooling and motivate students. Others are wary, fearing it will make kids lazy.

The year is 1922; the leading advocate is Thomas Edison, who has a special place in my heart for founding what became General Electric (GE) in my hometown of Schenectady, NY. He promised that his invention, the motion picture, “is destined to revolutionize our educational system”—even replacing textbooks.

A century later, it’s safe to say that Edison’s revolution didn’t play out exactly as he pictured (nor justify his critics’ worst fears), but it surely had profound effects. Movies didn’t replace the written word—nothing can—but they put new tools at teachers’ fingertips. Like waves of technological innovation that followed, movies also presented schools with crucial choices about their responsible use in ways that both benefit and protect students.

Introducing generative AI

And now comes generative AI (GenAI), best recognized as the chatbots that have exploded into popular awareness. Once again, schools will be a place for crucial decisions. Companies like ours—the largest provider of learning solutions to K-12 schools—have a role here, and I believe we should publicly state principles to keep students and teachers at the center of GenAI development.

Half of educators say they are currently using generative AI, and it is saving them time. Recent studies suggest that teachers spend over 50% of their time on non-teaching tasks—imagine what could be possible if they spent more of that time directly connected to students and teaching.

That idea, I believe, only scratches the surface of AI’s potential benefits. AI tools can enhance teachers’ productivity by helping them plan lessons and activities, convert text into presentations, and create summaries of texts—just to name a few. AI tools can also enhance students’ literacy learning with personalized experiences—such as providing teachers with suggested feedback and revision guidance on student writing.

It’s exciting, but AI will earn the trust of schools, teachers, families, and education leaders only if it’s used with wisdom, guidelines, and safeguards that ensure it genuinely supports teachers, benefits students, and never compromises children’s privacy or safety. That’s why we’re outlining five recommended principles we believe should guide the responsible adoption of AI technologies in K-12 schools.

Keep teachers at the center

The teacher-student relationship is crucial. We believe in a “high-tech, high-touch” approach in which technology should support, not mediate, this connection. Teachers are closest to the educational experience, and their voices must also inform the development of new technologies intended to serve them.

Teachers will need support and professional development to build “artificial intelligence literacy” to effectively leverage the technology in the classroom. Most educators (76%) identify a need for education on ethical AI usage and its integration into the classroom.

Uphold student privacy, safety, and well-being

Protecting student privacy and data is non-negotiable. Existing federal laws provide strong protections that must apply to the new uses that may be associated with GenAI. Many state laws also protect children’s and students’ privacy, and third-party organizations must uphold and promote data privacy and student safety.

Lawmakers should ensure that existing laws and regulations properly account for GenAI, and should clarify how those protections apply to it.

Ensure responsible and ethical use

Families need to understand how GenAI is being used in schools—without being overwhelmed with information that’s too detailed or technical to understand. Federal and state policymakers should work with AI experts to determine appropriate disclosure requirements and provide guidance for how districts and schools can access the information they need about GenAI systems they choose to use.

Encourage continuous evaluation and improvement

Systemic integration of AI into education technology and practice requires analysis of which strategies work, for whom, and why. Creating a culture of ongoing evaluation and improvement will ensure the technologies genuinely support teaching and learning. Even these trials must include guardrails to protect student privacy, safety, and well-being.

Prioritize accessibility and inclusivity

As classrooms become more diverse in demographics and learning needs, GenAI tools can equip teachers with personalized approaches, recommendations, and supplemental materials to meet each student’s needs. As new bias, equity, and accessibility considerations emerge with the use of GenAI, regulations need to evolve.

Our schools, like our society, face the task of defining guardrails for a field that’s evolving with astonishing speed. Policymakers and companies like ours must put empathy, safety, and privacy at the forefront to maximize the potential of these technologies to elevate teaching and learning.

Jack Lynch is CEO of HMH.

https://www.fastcompany.com/91258146/5-genai-principles-for-k-12-education

Created Jan 10, 2025, 19:50:09
