Why AI fails with image classification and computer vision
This is the full, pre-edit version of my CNN article on the benefits and shortfalls of using AI technology for physical security at schools: Opinion: I study school shootings. Here’s what AI can — and can’t — do to stop them
Can AI weapon detection stop the next school shooting? Maybe.
Artificial intelligence and school shootings are two of the biggest topics in education. School security has transformed from a niche market into a multi-billion-dollar industry as shootings have increased tenfold over the last decade.
In resource-constrained environments like public schools, AI offers incredible potential to detect threats automatically, faster than any human could. There is not enough time, money, or person-power to watch every security camera and look inside every pocket of every student’s backpack. When people can’t get this job done, AI technology is a powerful proposition.
I’ve collected data on more than 3,100 shootings since 1966 plus security issues like swatting, online threats, averted plots, near misses, stabbings, and students caught with guns.
Based on my research, there’s no simple solution because school security is complex. Unlike airport terminals and government buildings, schools are large public campuses that are hubs for community activities. A weeknight at a high school might have varsity basketball, drama club, adult English language classes, and a church group renting the cafeteria. Amid this flurry of activity, a suicidal teen–who has been plotting a school shooting for months–can stash guns in the school bathroom because he knows there will be security at the front door when classes start in the morning.
What is AI weapons detection?
Two common applications of AI right now are computer vision and pattern analysis with large language models. These provide the opportunity to monitor a campus in ways that humans can’t.
AI is being used at schools to interpret the signals from metal detectors, classify objects visible on CCTV, identify the sound of gunshots, monitor doors and gates, search social media for threats, look for red flags in student records, and recognize students’ faces to flag unrecognized intruders.
This software functions best when it addresses well-understood and clearly defined problems like identifying a weapon or an intruder. If these systems work correctly, when a security camera sees a stranger holding a gun, facial recognition flags the face of an unauthorized adult and object classification identifies the gun as a weapon. These two autonomous processes then trigger another set of AI systems to lock the doors, call 911, and send text message alerts.
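As a concrete illustration of how that chain of triggers could be wired together, here is a minimal Python sketch. Every name in it (analyze_frame, trigger_response, and the action strings) is a hypothetical placeholder rather than any vendor’s actual API; the point is simply that each automated response hangs off probabilistic detections upstream.

```python
# Hypothetical sketch of a chained AI security pipeline, not any vendor's real API.
# Each downstream action is triggered only by upstream *probabilistic* detections.

from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    label: str          # e.g. "unknown_face" or "gun"
    confidence: float   # model's probability estimate, 0.0 to 1.0


def analyze_frame(frame_id: str) -> List[Detection]:
    """Placeholder for face recognition plus object classification on one CCTV frame."""
    # In a real system these scores would come from trained models, not constants.
    return [Detection("unknown_face", 0.91), Detection("gun", 0.74)]


def trigger_response(detections: List[Detection], threshold: float = 0.7) -> List[str]:
    """Fire the automated responses only if both detections clear the threshold."""
    labels = {d.label for d in detections if d.confidence >= threshold}
    if "unknown_face" in labels and "gun" in labels:
        return ["lock_doors", "call_911", "send_text_alerts"]
    return []


if __name__ == "__main__":
    detections = analyze_frame("camera_12_frame_0450")
    print(trigger_response(detections))  # ['lock_doors', 'call_911', 'send_text_alerts']
```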
How AI works
With school security, we want certainty. Is the person on CCTV holding a gun? We expect a yes-or-no answer. The problem is that AI models provide “maybe” answers, because they are based on probability.
For an AI metal detector, the hardware that scans each student creates new data, which an algorithm compares to patterns from training data collected when weapons were present. The software then makes a series of ‘yes’ or ‘no’ choices.
A simple example is predictive software with two classification options, “Cat” or “Not Cat”. There are four possible outcomes: a true positive (a cat that the software labels “Cat”), a false positive (something that isn’t a cat but gets labeled “Cat”), a true negative (not a cat, correctly labeled “Not Cat”), and a false negative (a cat that the software misses).
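Here is a minimal sketch of those four outcomes in Python, using invented labels rather than any real model’s output, just to show how a prediction and the ground truth combine:

```python
# Minimal sketch of the four possible outcomes for a "Cat" / "Not Cat" classifier.
# The example data is invented purely to show how predictions and truth combine.

samples = [
    {"truth": "Cat",     "prediction": "Cat"},      # true positive
    {"truth": "Not Cat", "prediction": "Cat"},      # false positive (false alarm)
    {"truth": "Not Cat", "prediction": "Not Cat"},  # true negative
    {"truth": "Cat",     "prediction": "Not Cat"},  # false negative (missed cat)
]

counts = {"true positive": 0, "false positive": 0, "true negative": 0, "false negative": 0}
for s in samples:
    if s["prediction"] == "Cat":
        outcome = "true positive" if s["truth"] == "Cat" else "false positive"
    else:
        outcome = "true negative" if s["truth"] == "Not Cat" else "false negative"
    counts[outcome] += 1

print(counts)  # each of the four outcomes appears exactly once in this toy data
```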
For AI classifying images as a weapon, an algorithm compares each new image to the patterns of weapons in training data. AI doesn’t know what a gun is because a computer program doesn’t know what anything is. When an AI model is shown millions of pictures of guns, the model will try to find that shape and pattern in future images.
If there’s a pattern that matches the shapes in the training data, the software assigns a probability that it could be a match. The shape could be a broom, tree branch, shadow, or a gun. It’s up to the software vendor to decide the probability threshold between a gun and not a gun.
This is a messy process. An umbrella could score 90% while a partially obscured handgun might only score 60%. Do you want to avoid a false alarm for every umbrella, or get an alert for every handgun?
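A rough sketch of that trade-off, with confidence scores invented for illustration rather than taken from any real product: whatever threshold the vendor picks, either the umbrella triggers a false alarm or the partially hidden handgun is missed.

```python
# Invented confidence scores to illustrate the threshold trade-off.
# Real models produce different numbers; the dilemma is the same.

scores = [
    ("umbrella under an arm", 0.90),     # not actually a weapon
    ("partially hidden handgun", 0.60),  # actually a weapon
]


def alerts(threshold: float) -> list:
    """Return the objects that would trigger an alert at a given threshold."""
    return [name for name, score in scores if score >= threshold]


print(alerts(0.5))   # both alert: the gun is caught, but the umbrella is a false alarm
print(alerts(0.75))  # only the umbrella alerts: a false alarm fires and the real gun is missed
print(alerts(0.95))  # nothing alerts: no false alarm, but the handgun goes undetected
```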
AI software interpreted this CCTV image as a gun at Brazoswood High in Texas, sending the school into lockdown and police racing to campus. The dark spot is a shadow on a drainage ditch that is lined up with a person walking.
Humans + AI = ?
Adding a “human in the loop” doesn’t solve this problem. A person watching the security camera feed wouldn’t see a gun, because they know the ditch is there. But once the AI system flags the image, a human reviewer verifying it is psychologically primed to see a gun too.
Cameras generate poor quality images in low light, bright light, rain, snow, and fog. Should a school be using AI to make life or death decisions based on a dark, grainy image that an algorithm can’t accurately process? A large transit system canceled the same vendor’s contract because the software couldn’t reliably spot guns. Schools need to understand the limits of what an AI system can–and cannot–do.
Tesla runs the most well-funded computer vision project in the world. Its cars keep crashing on Autopilot because even the best AI software still can’t overcome the inherent limitations of a camera in bad weather and poor lighting. There’s a reason self-driving cars are tested in Phoenix!
During a recent test, a Tesla drove full speed into a wall that was painted to look like the road. This is because analyzing images alone (Teslas use cameras and computer vision instead of LIDAR and spatial sensors) can’t fully capture the features of the real 3D world.
Whether the input comes from cameras or hardware scanners, AI isn’t magic. The signals produced by a metal detector are based on the physics of how metal interacts with magnets. A metal water bottle and a metal gun will both set off the alarm. Adding AI software to a magnetometer doesn’t change the physics of a gun and a water bottle producing the same signal. This is why an AI screening vendor is being investigated by the FTC and SEC for inaccurate marketing claims made to schools across the country.
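To make that concrete, here is a toy numerical sketch with invented magnetometer readings (not real sensor data): when a water bottle and a handgun produce overlapping signal values, no threshold the software picks can separate them reliably.

```python
# Invented magnetometer readings to illustrate overlapping signals.
# When two object types produce the same signal range, no software threshold can separate them.

water_bottle_signals = [0.62, 0.70, 0.75, 0.81, 0.88]
handgun_signals = [0.60, 0.68, 0.77, 0.83, 0.90]


def error_rate(threshold: float) -> float:
    """Fraction of items misclassified if everything at or above the threshold is called a gun."""
    false_alarms = sum(s >= threshold for s in water_bottle_signals)
    missed_guns = sum(s < threshold for s in handgun_signals)
    return (false_alarms + missed_guns) / (len(water_bottle_signals) + len(handgun_signals))


# No candidate threshold gets the error anywhere near zero, because the signals overlap.
for t in [0.5, 0.6, 0.7, 0.8, 0.9, 1.0]:
    print(f"threshold {t:.1f}: error rate {error_rate(t):.0%}")
```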
Costs and Lobbying
The biggest expenses in school security are the physical equipment (cameras, doors, scanners) and the staff who operate it. The dream scenario for a vendor is layering software onto existing infrastructure, because that generates profit with minimal equipment cost.
This creates a perverse incentive for vendors to install software onto existing systems even if it isn’t going to perform well, because it still generates revenue.
Instead of organic product adoption based on merit, vendors lobby to structure government funding around a shortlist of specific products. During a period of rapid AI innovation, schools should be able to select the best product available instead of being forced to contract with one company. Some vendors have even used the post-9/11, and now woefully outdated, DHS SAFETY Act as a tool to earmark funding. The self-certification paperwork doesn’t involve testing by the federal government, yet specific language about being “SAFETY Act certified” is written into state education funding.
AI works better with better data
An AI metal detector is a more expensive and more confusing version of a metal detector. Layering paid image-classification software onto security cameras creates a false sense of security and erroneous alerts because of unavoidable flaws in image quality.
Because schools are unique environments, they need security solutions, both hardware and software, that are designed for schools from the start. This requires companies to analyze and understand the characteristics of gun violence on campus before developing an AI product. For example, a scanner created for sports venues, which only allow fans to carry in a limited number of items, is not going to function well in a school where kids carry backpacks, binders, pens, tablets, cellphones, and metal water bottles every day.
For AI technology to be useful and successful at schools, companies need to address the biggest and most common security challenges. From studying thousands of shootings, I’ve found that the most common situation is a teenager who habitually carries a gun in their backpack and fires shots during a fight. Manually searching every student and bag is not a viable solution because students end up spending hours in security lines instead of classrooms. Searching bags is not an easy task, and shootings still happen inside schools with metal detectors.
Neither image classification from CCTV nor retrofitted metal detectors address the systemic problem of teens freely carrying guns at school each day. Solving this challenge requires better sensors with more advanced AI than any product available today.
Future
The current landscape of school security is drawing on the past instead of imagining a better future. Medieval fortresses were a failed experiment that ended up concentrating risk rather than reducing it. We are fortifying school buildings without realizing why European empires stopped building castles centuries ago.
The next wave of AI security technology has the power to make safer schools with open campuses that have invisible layers of frictionless security. If and when something does go wrong, open spaces provide the most opportunities to seek cover. Children should never be trapped inside a classroom again like they were in Uvalde.
Schools sit at the brink between a troubled past and a safer future. AI can either inhibit or enable how we get there. The choice is ours.
—
David Riedman is the founder of the K-12 School Shooting Database, an open-source research project that documents gun violence at schools back to 1966. He hosts the weekly Back to School Shootings podcast and writes School Shooting Data Analysis & Reports on Substack.








