A commercial boom around using artificial intelligence in the classroom is creating a slew of privacy and safety hazards well before Washington grapples with the fast-moving technology.
Dozens of Arizona school districts have been vetting technology vendors to weed out products that might use student data for advertising. Schools in West Virginia and Montana have started deploying facial recognition systems to boost security, even though the technology has a high rate of false matches among women and children, a concern that has already drawn scrutiny in New York.
A rush to create AI tools for K-12 education has attracted many businesses that aren’t familiar with the tighter privacy laws that govern kids, increasing the risk that their information will end up with heedless vendors, experts say. That knowledge gap, combined with lagging federal support, is forcing state and local leaders to navigate protections for young people on their own as schools also turn to technology for personalized tutoring and lesson planning.
“There hasn’t been a whole lot from the federal government,” Christine Dickinson, technology director of Maricopa Unified School District, south of Phoenix, Arizona, said in an interview. “We’re hopeful that there is some guidance, however, we’re going to go full steam ahead with making sure that we have all of the tools in place for our students to be successful and our teachers to make sure that they can uphold that academic integrity in their classrooms.”
Oregon provides a checklist and other materials for schools looking to develop generative AI policies while California is directing schools on how they can integrate AI in the classroom in a way that prioritizes student safety. Mississippi expects to release school AI guidance in January, and Arizona is forming a committee in early 2024 to recommend policy procedures for implementing and monitoring the technology in schools.
“There’s the balance of mitigating the risk that [AI] probably poses in terms of data privacy and bias and the equity implications, versus the opportunities it presents,” said Charlene Williams, director of Oregon’s state education department, who noted that the state received some federal input on its guidance.
The Covid-19 pandemic set off a boom in education tech as students and educators suddenly went virtual — a swift change that also sparked a federal crackdown on the industry over allegedly lax privacy practices.
Earlier this year, the Federal Trade Commission filed a complaint against the now-defunct ed tech company Edmodo, accusing it of violating COPPA, a federal law that bars using students’ personal information for advertising without parental consent, among other alleged violations. The complaint followed the FTC’s warning to the ed tech industry last year that the commission was monitoring COPPA compliance.
Now the FTC is proposing to codify long-held guidance that bars schools from authorizing data collection on children under age 13 for commercial purposes like advertising.
The scramble to adopt technology during the height of the pandemic led Arizona’s state education department to beef up data privacy and security practices in 2022, according to Dickinson. The Maricopa school district, home to roughly 8,000 students, and others across the state now work with 1EdTech, which helps them maintain a dashboard of ed tech vendors that have been vetted to work with children. The clearance process ensures the companies have met the district’s data agreement and comply with COPPA and FERPA, two federal privacy laws aimed at protecting children online and student education records, respectively.
But as the pandemic eased and students returned to in-person instruction, artificial intelligence began accelerating existing concerns with education technology.
In addition to making policy and procedure recommendations, Arizona’s forthcoming AI committee plans to issue guidelines on AI in the classroom that would supplement recommendations made at the federal level.
President Joe Biden’s sweeping executive order on AI gives the Education Department roughly a year to develop more resources that address non-discriminatory uses of the technology. Additionally, the department hopes to release an “AI toolkit” in the spring to help schools implement the agency’s policy recommendations. The toolkit would include direction on “designing AI systems to enhance trust and safety and align with privacy-related laws and regulations.”
Parents are also worried about what’s being done with their children’s data. About 57 percent of parents surveyed said their child’s school or district hasn’t asked for their input on how to “securely and responsibly use student data and technology,” according to polling from the Center for Democracy and Technology conducted between June and August 2023.
Hannah Quay-de la Vallee, a senior technologist at the center, said that states can play a larger role in vetting and approving the privacy practices of vendors, which could prove beneficial for smaller districts that don’t have the capacity to do so.
“Both discrimination and privacy violations can result from lack of information about the system, lack of auditability of the system,” Quay-de la Vallee said, noting that there’s some overlap in how education leaders address both.
There’s also legislation in Congress that would create more federal oversight and require some companies to report how their technology could impact consumers. But lawmakers, particularly in the Senate, are still mapping out the path for AI governance.
The Algorithmic Accountability Act, though not an education-specific bill, would affect ed tech companies and non-education-specific vendors contracted by schools. The bill would give the FTC greater oversight of some tech companies by requiring them to assess their AI systems for a range of factors like bias and effectiveness.
After a legal challenge and subsequent moratorium, New York banned the use of facial recognition in schools in September after the state found the use of the technology for security purposes “may implicate civil rights laws,” noting that it could lead to a “potentially higher rate of false positives for people of color, non-binary and transgender people, women, the elderly and children.” Montana, by contrast, barred the continuous use of facial recognition technology by state and local governments but carved schools out of the ban.
Sen. Ron Wyden, an Oregon Democrat and a lead co-sponsor of the Algorithmic Accountability Act, told POLITICO he’s “generally an opponent of facial recognition,” noting that “a lot of the systems are deeply flawed.”
“Facial recognition in schools should be banned until there is clear evidence that it is accurate, would actually improve safety in schools, and won’t be used to target Black, Hispanic and other students of color,” Wyden said in a follow-up statement to POLITICO.
Bree Dusseault, principal and managing director at the Center on Reinventing Public Education, who has been tracking state guidance on AI in the classroom, underscored the importance of releasing preliminary AI resources now, as schools are already seeing the technology used in classrooms.
“We learned this during the pandemic, that some [school] systems make choices to get out ahead and still put flags in the sand and try to put some initial thoughts and guidance out there — and others do not,” Dusseault said. “And those have different implications on the students and the educators in those systems.”