In my weekly perusal of education newsletters, I came across a Time magazine article about new attempts to bring AI and machine learning to monitoring student behavior on school devices. While the article focuses on student mental health—suicide prevention in particular—I looked into the companies mentioned therein and discovered that the scope of monitoring efforts is broad and deep. It is a fascinating and discomforting topic, with each company working on a different aspect of student safety with rhetoric to match.
The Cast
Here are four companies mentioned in the article along with a brief description of what they offer based on what I could find on their respective websites.
- Bark was initially a service for parents to monitor their children’s devices. After the Parkland, FL shooting, they decided to offer a free service to schools that monitors Google Workspace and Office 365 activity. They provide abundant technical documentation and a philosophy-driven approach. Their safety support includes identifying personal troubles like depression, flagging instances of bullying and abuse both within and outside of school, and escalating highly threatening events like shootings and suicide.
- GoGuardian is focused on student instruction. They have an LMS¹ designed for individual learning experiences and keeping students on task. They also provide safety support focused on student mental health, emphasizing that such issues greatly hinder a student’s ability to learn.
- Securly appears to be one of the oldest and most established companies in this space. They more directly advertise AI tools, and have an incredible number of services that are difficult to parse. Their data monitoring tool is device-wide, so, unlike the other companies, it can ingest social media use and messaging services. Their overall strategy is similar to Bark’s, but their option to display information and guidance directly to students is unique.
- Gaggle has also been around for years, and takes a more severe stance. They are marketing themselves to those who are scared of the student safety situation. They offer therapy and crisis lines as part of their service, and take a hard line on the necessity of student monitoring.
The Good
In a world where we seem incapable of addressing the root causes of student mental health and safety—we vaguely wave our hands at “mental health” but never consider if we can do more to mitigate that crisis as a society—there are companies willing and able to address the symptoms. This includes helping schools identify students to prioritize for counseling and therapy, assessing bullying and abuse that occurs on school devices, and anticipating violent events like school shootings or suicides and then intervening when necessary. There are also simpler issues that schools are legally required to handle, like keeping students safe when they search online. When I was in school, this amounted to device- and network-level filters and firewalls to keep students away from explicit and dangerous content. But the spectrum of problems that students and schools face due to the introduction of a highly connected world, exacerbated by regular access to devices, continues to grow. The scope of companies attempting to help schools appropriately address these problems has grown in kind. While it’s sad that these problems exist at all, there are several benefits to the resulting emphasis on student safety.
A key philosophy stated by these safety and monitoring companies is that students and teachers cannot effectively participate in their education if they don’t believe their school and the people within it are safe, welcoming, and trustworthy. When teachers and administrators know their students are provided all the necessary emotional support, they can be more confident in their daily work. That confidence can be instilled in students as well, who tend to be quite responsive to their environments at a young age. Students then spend more of their time in school focused on learning and not on whatever difficulties they may face when they go home.
When done well and with an open line of communication between everyone involved, these tools can help reduce the stigma around mental health. They increase awareness that we can’t simply compartmentalize areas of life: students are living their lives with real problems, both in and out of school, and those problems affect their ability to learn and perform. By offering therapy phone lines or text channels students can access from their school devices, these companies let students learn about therapy as a tool they can and should use throughout life. When students feel they can safely use these resources, it’s a powerful way to both unburden the teacher and empower a student to get the assistance they need and improve their outlook on their own education. For students too young or uncomfortable to directly access these resources, data can help refer someone in particular need to professional mental health assistance.
It’s beneficial for teachers and administrators to gain insight into how their students are doing and develop a sense of their emotional well-being. An empathetic educator can put this information to good use, crafting learning experiences that thoughtfully engage their students by being aware of who may be at risk of becoming overwhelmed and frustrated. GoGuardian takes this exact approach. Their LMS offers personalized learning workspaces on a student’s device, but with oversight that shows a teacher how focused and on-task everyone is. In a world of indirect lessons done on a personal school device, it’s harder for a teacher to visually survey a class. For both learning and safety, these tools can provide the information teachers need so they don’t lose track of their students.
Student data is heavily regulated. I see this in my own company, and can attest to the onerous, but necessary, rules around student data storage, usage, and removal. Because of that, I believe every company of notable size in the student monitoring field is closely vetted and doesn’t use student data for anything overtly nefarious. So, how is the data being legitimately used? There are certainly benefits as stated above, and schools are generally recognizing the utility of ensuring student health and safety in a way other than arming teachers, but there is so much more going on than the initial pitch given by each company.
The Bad
The emergence of these companies, and the particular tools they employ to surveil and monitor students, worries me. I’m concerned both about the utility and accuracy of these tools, and whether schools will use the requisite tact to address any issues that arise. The existence of this monitoring at all creates poor expectations for students as they mature, and a digital panopticon is not a good precedent when we’ve already lost so much privacy among the adult population.
Most of these companies handle flagged safety issues via an escalation model based on some combination of severity and confidence: a dashboard shows schools everything caught by the monitoring software, with each item classified by the kind of issue, how potentially dangerous it is, and how confident the model is that its classification is accurate. Bark is the most open in explaining this process, but the others appear to take a similar approach. They all also use human review to confirm aggregated data and accelerate the response to immediate threats. Any third-party counselors or therapists are trained as mandatory reporters.
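To make that escalation model concrete, here is a minimal sketch of how such a triage pipeline might work. The categories, weights, thresholds, and tier names are my own invention for illustration, not any of these companies’ actual implementations:

```python
from dataclasses import dataclass

# Hypothetical severity weights per issue category (illustrative only).
SEVERITY = {"violence": 3, "self_harm": 3, "bullying": 2, "profanity": 1}

@dataclass
class Flag:
    student_id: str
    category: str      # e.g. "self_harm", "bullying"
    confidence: float  # model's confidence in its classification, 0-1

def triage(flag: Flag) -> str:
    """Assign a flagged item to a review tier based on severity and confidence."""
    severity = SEVERITY.get(flag.category, 1)
    score = severity * flag.confidence
    if severity == 3 and flag.confidence >= 0.8:
        return "immediate human review"    # escalate to an on-call reviewer
    if score >= 1.5:
        return "same-day dashboard queue"
    return "weekly summary"

# A high-confidence self-harm flag jumps the queue; mild profanity does not.
print(triage(Flag("s-102", "self_harm", 0.92)))  # immediate human review
print(triage(Flag("s-417", "profanity", 0.70)))  # weekly summary
```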
All of the above is reasonable on the surface, but requires a clear view of what needs to be prioritized. How confident is the company, and the school, that a student needs intervention immediately, and how do you approach that? Because the monitoring companies increasingly rely on machine learning to process the incoming data, there will necessarily be false positives and negatives depending on how the model is tweaked. False positives could be hugely damaging to a student’s view of themselves, and undercut any feeling of trust and safety they had while working on their school device. What did they do that led to this interaction? Should they trust their own teachers or peers? Is there something wrong with them?
Bark offers schools flexibility in the parameters they choose, so a school can be notified more frequently with the expectation that more of those alerts will be false positives, or focus only on the most damaging and dangerous interactions, which will lead to some students being missed early on. I think this is the correct philosophical approach because ultimately it should be up to the school to decide how to work with their students. But it requires a new realm of decision-making among administrators.
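To see what that parameter choice actually trades off, here is a toy sketch; the confidence scores and labels are entirely fabricated, not real student data or any vendor’s model output:

```python
# Each tuple: (model confidence, did the student actually need support?)
flags = [
    (0.95, True), (0.85, True), (0.80, False), (0.65, True),
    (0.60, False), (0.55, False), (0.40, True), (0.30, False),
]

for threshold in (0.5, 0.7, 0.9):
    alerts = [(c, needs) for c, needs in flags if c >= threshold]
    false_positives = sum(1 for _, needs in alerts if not needs)
    missed = sum(1 for c, needs in flags if needs and c < threshold)
    print(f"threshold={threshold}: {len(alerts)} alerts, "
          f"{false_positives} false positives, {missed} students missed")
```

A lower threshold surfaces more students at the cost of more spurious alerts; a higher one quietly drops the borderline cases. That is exactly the kind of decision I would rather see a school make deliberately than inherit from a vendor default.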
Who is charged with sifting through the issues reported by the software? How do they use this information to allocate counselor time for students? Consider a student who needs assistance—perhaps they self-identified as such—but isn’t flagged by the system because they don’t do anything inappropriate on their school device. If the system is working “efficiently” then most of the counselor time is already allocated to students who “need” it. This leaves someone else unable to obtain support.
A fascinating approach taken by Securly is proactive messaging given directly to students on their device. Imagine working on an assignment on your laptop, and sending a message to a friend out of frustration because you find it confusing or difficult. Then you get a pop-up banner at the top of your screen with some milquetoast message of encouragement. How would you feel? For a small percentage of students that may be useful, but I suspect most would find this annoying at best, and many would have a sense that their privacy had been invaded. Considering I’m already skeptical of how well the educators receiving this information can thoughtfully respond to each issue, I’ve no faith in software to have the requisite tact and confident presence necessary to help a student.
And what about these privacy concerns that would be felt by both students and parents? Here is a hypothetical exchange within the For Parents section of Gaggle’s FAQ, which I found quite telling:
Question: You are invading my child’s privacy! Well, aren’t you?
Answer: Most educators and attorneys will tell you that when your child is using school-provided technology, there should be no expectation of privacy. In fact, your child’s school is legally required by federal law (Children’s Internet Protection Act) to protect children from accessing obscene or harmful content over the internet.
I first noticed the tone this question is written in. It is the voice of a frustrated and potentially aggressive parent, which is a surprisingly direct and passive-aggressive approach to writing an FAQ. It comes off as condescending to anyone who has this question.
Its answer is equally tactless. It directly states that there is no privacy to invade in the first place, and also implies that the privacy invasion is a legal obligation. This is purposefully misleading: schools are indeed required to help protect children, but most of that can be done via content filtering. That is sufficient, and wholly different from actively monitoring what a student does on their device and subsequently passing that data along for analysis and potential action. If a successful company is blatantly open in their opinions about student privacy, it heavily suggests many district leaders concur.
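To make that distinction concrete, here is a minimal sketch, entirely hypothetical and not any vendor’s implementation, of the difference: a filter decides at request time whether a page is allowed and then forgets, while monitoring records and classifies what the student actually does for later review.

```python
# Hypothetical sketch: the blocklist, keyword, and labels are invented for illustration.
BLOCKLIST = {"example-gambling-site.test", "example-adult-site.test"}

def content_filter(domain: str) -> bool:
    """A filter answers one question at request time, then forgets: may this proceed?"""
    return domain not in BLOCKLIST

ACTIVITY_LOG = []  # monitoring, by contrast, retains what the student did

def monitor(student_id: str, message: str) -> None:
    """Monitoring records the content, classifies it, and keeps it for review."""
    label = "flagged" if "hate this" in message.lower() else "ok"  # stand-in for an ML model
    ACTIVITY_LOG.append((student_id, message, label))

print(content_filter("example-gambling-site.test"))  # False: blocked, nothing stored
monitor("s-204", "I hate this assignment")           # stored and labeled for later review
print(ACTIVITY_LOG)
```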
Students should have some reasonable expectation of privacy.² Just as a child’s room should be a safe place for them to exist in the real world, they should have a safe place to explore the digital world. The increased funding for personal student devices is a potential boon here, giving access to children whose families could not otherwise afford the kind of connectivity our current world expects. By adding any full-device monitoring system, we strip this away, much as putting a camera in a child’s room would be a terrible way to build trust and understanding.
We can still provide students with these devices, but focus on putting boundaries around the experience. This will never be perfect—I remember my tech-savvy classmates regularly circumventing school firewalls—but kids also sneak out of the house to experience the world. Those who are dedicated to escaping over these digital walls will find a way, and most will be fine with the given situation.
Furthermore, normalizing digital surveillance for the next generation is a poor choice. We’ve already given up so much of our ability to exist as individuals, both in public and online, and though some effort has been made to wrest some of that control away from corporations, governmental institutions continue to push for more access. Forcing this approach onto children is short-sighted. As mentioned above, we’re attacking a symptom instead of a problem, and I expect this attempt will only exacerbate the underlying problem by making children wary of attempts to diagnose them, subsequently eroding their trust in the institutions charged with preparing them for adulthood.
The Ugly
My research into this topic was a wild journey. After reading the Time article above, I came down fairly strongly against this entire industry, particularly the specific approach of using these monitoring tools to triage students who may need counseling and other mental health aid. It was disturbing and uncomfortable.
I first looked into Bark, and came away surprised by how coherent, sensible, and sensitive they were to my specific concerns. They were not doing whole-device monitoring, instead focusing on school-specific applications. They were abundantly transparent about their training process, how human review works, and their method for escalating issues when they are immediately dangerous.
Gaggle presented a horrific about-face in attitude. In addition to the frustrating question above about student data privacy, they had this quip in response to a question about pricing:
Perhaps a better question to ask is, “How much will it cost your school or district if you don’t use Gaggle Safety Management?”
This is a disgusting response. What if that was the slogan for Red Cross First Aid training, or anything else pitching itself as saving lives? The explicit fear-mongering helps nobody, and is only a few steps away from a mob shakedown. What worries me more is that if these FAQ items remain as-is, it means they accurately reflect the attitudes of some educators out there. Gaggle is successful because the story they tell—wholly different from that of competitors Bark, GoGuardian, and Securly—resonates with enough people who have the power to purchase the service.
Teachers and students (and parents) are scared of the world we’re in, and for many good reasons. Student mental health is flagging, and that affects how effective a school is as a place for learning. However, twisting the knife into that fear serves nobody except the company, and it blatantly shows how little they actually care about improving the lives of a generation we can see is in so much turmoil.
These tools will continue to exist, but I hope they can focus on matters of trust and safety, building an experience that is fruitful for education and student growth, rather than privacy invasion and fear-mongering.
- ¹ A Learning Management System, or LMS, is a database meant to store all the information needed for a set of courses. Imagine tables containing textbook chapters, homework assignments, grading weights, etc.
- ² I have less of an issue philosophically when an employee has monitoring software on their work computer, because adults do know better. I just think that’s poor management, rather than an injustice.