Our Kids Are Growing Up in the Age of AI: Part 1
Our children, especially teens, are experiencing rapid change as AI applications begin to dominate many areas of learning. They have real reasons to be worried, and we can start supporting them now.
First, a quick note: I’ll be a speaker at a Stoicare online workshop on June 24 focused on raising resilient teens! Please register on the site to join us!
Next, a topic that’s on my mind a lot lately:
My kids are worried about AI. Can you blame them?
Obviously I’m not talking about the frightening artificial intelligence we see in sci-fi movies (I did encourage them to sit through the end of 2001: A Space Odyssey, and they were understandably nervous about HAL!).
They are on edge about the rise of large language models like ChatGPT and generative image tools like DALL-E.
Still developing their own learning and capacities, kids—especially teens—are now being confronted with machines that could seemingly surpass them before they even get the chance to complete their education. And they aren’t too pleased or excited, from what I can tell. For example: When I suggested that my daughter could look into using generative AI to quickly illustrate the cover of a creative project she was working on outside of school, she responded: no, thanks. She’d rather put her own artistic skills, or her friends’ talents, to work. She’s expressed the same concerns about AI-written stories and novels—it’s scary and dispiriting, as a creative person. She is also not keen on college majors that have begun to focus increasingly on AI.
I get it. This whole AI thing is getting very real, very quickly for all of us, including our kids.
It is already creating both burdens and temptations for students. For example: Professors are accusing college students of using ChatGPT to cheat on their assignments. At Texas A&M, a professor who suspected students of using ChatGPT to write papers threatened to fail them, giving them incompletes until they could prove they hadn’t used AI—forcing them to gather evidence of their own innocence. Anecdotally, I’ve heard other stories of college students being slammed by their instructors as AI-based cheaters when in fact they were innocent. It turns out the software used to catch the cheating is not as sophisticated as it needs to be (at Texas A&M, the professor was apparently using ChatGPT itself to test whether students had used AI!). Another added burden for our already-stressed youth!
Even my daughter, while still in high school, has had to help catch AI-based writing in a volunteer role: At a Model UN session, she and her co-organizers ran students’ position papers through Turnitin, a software tool designed to catch AI-based cheating (it’s being used in many classrooms, too). Several papers showed AI origins and were disqualified. One flagged student said: “I didn’t see any rules against using AI to write our papers.” Brazen! In fact, there were rules in the fine print, and organizers told the student that he could still participate but wouldn’t be eligible for any awards in the session.
This highlights how tempting it is for students to get a little “extra help” from ChatGPT for writing work—which, of course, is cheating.
It’s already an arms race of AI vs. AI detectors. This week, I learned about a new type of service available online: a software tool that will rewrite your AI-written work to ensure it passes AI-detection tests! It’s very blatant, too: “Pass AI Content Detection…” the homepage reads. The software “humanizes your content, improving its quality while allowing it to pass as human in AI detectors.” It could not be more obvious, even naming five specific AI detection tools that it can “pass.” How does it work? With AI, of course: “advanced machine learning models,” the site says. (I’m not going to name the tool here, since I don’t want to be blamed for any unforeseen consequences!)
Schoolwork is changing rapidly to avoid AI use. New York City’s public schools have banned ChatGPT outright. Here in California, my daughter has reported that at her high school, fewer teachers are assigning take-home essays. All senior year, she had just one take-home essay project. Instead, high schoolers are asked to do their writing in class, where they can be observed at work—and they have to write longhand. (I’m just waiting for the day when they bring back the old blue book “technology”: pencil, paper, and a timer as you sit at an uncomfortable student desk, scrawling away your long-form answers! Some of us remember being students in the ’90s and earlier…)
Even if they are working on tests on their computers online while in class, their teachers require using “LockDown Browser,” which prevents them from opening other windows to cheat. From the software website: “Used at over 2000 higher educational institutions, LockDown Browser is the ‘gold standard’ for securing online exams in classrooms or proctored environments.”
On the flip side, some students report that AI comes in handy for creating study notes and “tutoring” them about new or difficult topics. For studying in school-approved ways, AI could be another tool in the toolbox.
But for their future careers, the picture is not necessarily pretty. The fear about jobs being taken by AI is palpable, especially here in Silicon Valley. First, many of us parents work at tech companies now laser-focused on competing in all kinds of AI development and applications, so we see it firsthand. Second, many students in the Bay Area want to work in tech, the region’s largest and most lucrative industry. Kids once thought that if they aimed for jobs in computer science or electrical engineering, they’d be “safe” and well-paid in our economy.
Think again. Now, large language models and coding “assistants” can already write simple code, and they’re getting better at it by the day. From a story in New York Magazine:
GitHub Copilot, a coding assistant developed by Microsoft and OpenAI, “analyzes the context in the file you are editing, as well as related files, and offers suggestions” about what may come next with the intention of speeding up programming. Recently, it has become more ambitious and assertive and will attempt a wider range of programming tasks, including debugging and code commenting.
Software industry analysts say large language models (LLMs) such as ChatGPT are good at writing in a “predictive” way—and that code is a prime candidate for this kind of writing. An OpenAI analysis reported that “around 80 percent of the U.S. workforce could have at least 10 percent of their work tasks affected by the introduction of LLMs, while approximately 19 percent of workers may see at least 50 percent of their tasks impacted.”
Outside of strictly tech jobs, how about my own profession as a writer and editor? Copywriters are already being laid off as ChatGPT creates simple text for websites and ads. And if you thought that careers in art and design were safe from AI because they were so creative and couldn’t be done by a machine, you may be mistaken. AI-created art is astonishing and highly customizable. It’s quite easy to make on free generative AI platforms with simple text commands, even for people with zero artistic skill. So even the most “human” of our pursuits are being taken over by computers.
So what to tell our kids being brought up in the age of AI? How do we help with their concerns about school and future work, and guide them to AI-proof their career choices?
The short answer is, it’s hard to offer well-informed guidance, in large part because we don’t know yet what AI is capable of. And we don’t know what we, as a society, will be willing to do to control and shape it… or if we will throw up our hands, and simply let it develop based on market forces.
The long answer is, this crossroads of AI should inspire us in two ways: First, to proactively shape the ever-growing hegemony of technology in our own lives, especially tech driven by profit and competition, which so far remains largely unregulated by governments. We should, at least, try to take back control as individuals, in a very Stoic sense, of the things in our power. We have to make decisions about how we want to use tech breakthroughs with much more mindfulness, on both a personal and a societal level. (I realize: That is easier said than done as we sit surrounded by tempting technologies and addictive platforms! But it’s not impossible to carve out time away from tech.)
Second, we need to stay in the moment, aware of the latest developments, and work on being even more HUMAN.
We have to enhance the two things that Stoics said were basic to humans: our faculty of choice, and our pro-social side—that is, how we relate to and work with other humans.
I will have more to say about this, and will drill down further into how we can focus on what’s human and support that in our kids and families, in part 2. More coming soon on this critical topic, with our kids’ futures at the forefront of my mind.
Until then… Let me know: are your kids, or are you, concerned about how AI is advancing, and how it will affect education and the work of the future?
I have been feeling like my children's generation, and the cohort between mine and theirs, are heading full-throttle into a Matrix-movie situation. I am worried. All I can do is influence my own children, and those I come in contact with, to pursue meaningful activities that bring connection. These technologies foster disconnection, discouragement, and despair when they are used without moderation. People are also losing their ability to prioritize and complete tasks by outsourcing those skills to the bots, and I watch a lot of unhealthy use of the time "freed up" by AI. If it frees one's time only to binge on more mindless entertainment, instead of doing something that brings connection and fulfillment, then it is promoting vice. I do believe there are virtuous (in the Stoic sense) ways to use AI, but I suspect the share of healthy use will steadily dwindle to a statistically insignificant percentage (if it isn't there already).
As I tell my kids: your screens and AI will never tell you when you are being an overindulged, relationship-destroying jerk.
Following the default path of the code will diminish you as a human being if we are not mindful about it, and it is my job as a parent to put guardrails on anything that might send my child off the cliffs of life until the child has developed enough discernment to navigate the terrain alone.