The Trump administration has made artificial intelligence a centerpiece of its economic agenda, promising to retrain a workforce it says must be ready to compete in an AI-driven future. One early piece of that effort, a free text-message course from the Department of Labor (DOL) and private partner Arist called “Make America AI-Ready,” is a useful start on the journey to AI literacy for all Americans. This seven-day, 10-minute-per-day course, which frames itself as “your AI 101,” is accessible, technically informative, and engaging (see below for the full contents). Here we analyze its strengths, lay out a few weaknesses we think should be addressed in the current version, and outline some stretch goals for an “AI 201” course that would build upon the original.
We evaluated this course on technical accuracy; framing and ideology; scope; commercial entanglements; agency; and pedagogical quality.
It’s accessible. The choice of SMS for delivery maximizes reach: it meets people where they are, requiring no app installation, no account creation, and no unfamiliar web platforms to navigate. The 10-minute-a-day pacing is practical.
It emphasizes verification of AI outputs. The course consistently emphasizes that AI output must be checked, not blindly trusted. The example of looking up a restaurant only to find out that a nail salon has opened in its place is memorable (Lesson 6, below). The course also thoughtfully extends this skepticism to AI-generated images, video, and audio.
It centers human responsibility. The quiz question about a coworker submitting an AI-generated report with fabricated statistics (Lesson 2, below) returns a sensible response: the human is responsible. This is repeated throughout the course and is one of its most important messages.
It’s honest about AI’s limitations. The course doesn’t shy away from the fact that AI can be confidently wrong. The term “hallucination” is introduced clearly, the concept of training data cutoffs is explained, and the course repeatedly emphasizes that AI predicts rather than knows or understands. For a 101-level course, this is appropriately calibrated.
There are, however, some things in the current version of the course that we would recommend fixing.
The course contains a serious inconsistency when it comes to data privacy and security. On the last day of the course it offers common-sense advice, stating “PROTECT your private info. Never share passwords, Social Security numbers, medical records, or confidential work data with AI tools,” later adding not to share “income data.” But some of the advice and exercises leading up to that point had already prompted users to input some of these “never share” types of data.
These self-contradictions expose a central tension: AI tools can be more useful when they know more about you, so a blanket prohibition against sharing private information will limit their usefulness. Unfortunately, there is no simple answer to the question of how to protect your privacy when using AI, and there is no single approach that will work for everyone. It requires critical thinking based on an understanding of different threat models, including prompt injection risks, traditional cybersecurity risks, legal risks, AI companies’ eagerness to train on user data, and workplace policies that of course vary between organizations.
We recognize that this level of nuance would be too much for an introductory course. We would recommend that the privacy protection lesson come earlier in the course, and include information about privacy settings that AI tools offer, such as temporary or incognito chats. Instead of the “never share” language, giving people at least a rudimentary understanding of what could go wrong would be more helpful, along with links to resources where they can learn more.
The quiz questions often ask the user for an explanation of AI’s failure modes and social effects. While it is important to face these head-on, the questions consistently have one “obviously correct” answer that maps to the course’s framing. Several wrong answers are absurd strawmen (“AI likes making things up to test you,” “AI’s internet connection was slow”). This limits the potential to build genuine understanding or critical thinking about AI’s functioning and societal implications.
We would recommend an approach that highlights known issues without pretending that the explanations are simple. Flexibility in how issues are framed will allow course participants to grapple with them in a manner that is relevant to the skills they are building. More open-ended quiz questions might include: “Your employer starts mandating that all workers use AI. This may enable your employer to monitor your productivity. What are your options?” or “You are about to apply for a loan. How can you find out whether and how AI will be used in evaluating your application?”
Beyond fixes to the current course, we see several opportunities for content development that would expand upon the introductory materials of AI 101.
For a course that is offered by the Department of Labor, there is very little content on the subject of work — the course frames AI solely as a productivity tool workers can use. The Department of Labor exists to protect workers, their wages, their safety, and their rights, yet the course largely skips over the ways AI is already reshaping hiring, performance monitoring, and layoffs of workers across many sectors.
An AI 201 course could provide more information on these developments and inform citizens of legitimate reasons they may have to call for regulation. It could also go into more depth on the privacy question. Finally, AI 201 could reckon with the broader societal consequences of this technology: for instance, bias, surveillance, and the concentration of power in the hands of a few large technology companies. Workers who understand these dynamics are not just AI-literate; they are better equipped to advocate for themselves.
The 101 course keeps its terminology simple, which is important. But sometimes it oversimplifies. An AI 201 course could deepen the explanation of how models are trained, make inferences, and deliver human-interpretable results.
The course’s technical explanation — AI finds patterns and makes predictions — serves as the entire mental model. This framing makes AI sound more mechanistic and less opaque than it actually is. On day 3, the language of pattern and prediction drops out, with the language of “instruction” and “results” substituting in for the human input and predicted output of AI. The current course also equates predicting with guessing and AI training with “studying” – analogies that might be a useful starting point, but are quite limiting.
For an AI 201 course, the connections between AI learning, model weights and predictions – as well as the connections between all of these things and the results generated from instructions – could be deepened. Indeed, how AI can be biased, can hallucinate, and otherwise can make errors is easier to comprehend when one understands a bit of the math behind machine learning.
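To make that concrete, here is a minimal sketch of the kind of arithmetic an AI 201 module could walk through. It is our own illustration, not drawn from the course: a single “neuron” with one weight and one bias, trained on a handful of made-up data points by nudging the weight and bias whenever a prediction misses.

```python
# A minimal sketch (not from the DOL course) of the math an AI 201 module might
# walk through: one artificial neuron that learns a weight and a bias from
# examples, mirroring the course's STUDY / PREDICT / IMPROVE framing.

# Toy training data, invented for illustration: hours of practice -> quiz score.
examples = [(1.0, 52.0), (2.0, 61.0), (3.0, 68.0), (4.0, 79.0), (5.0, 90.0)]

weight, bias = 0.0, 0.0    # the model's entire "knowledge" is these two numbers
learning_rate = 0.01

for epoch in range(2000):                        # IMPROVE: repeat many times
    for hours, actual_score in examples:         # STUDY: look at each example
        prediction = weight * hours + bias       # PREDICT: a weighted guess
        error = prediction - actual_score        # how far off was the guess?
        weight -= learning_rate * error * hours  # nudge the weight to shrink error
        bias -= learning_rate * error            # nudge the bias too

print(f"learned weight={weight:.2f}, bias={bias:.2f}")
print(f"prediction for 6 hours of practice: {weight * 6 + bias:.1f}")
```

Even this toy makes the course’s caveats tangible: the learned numbers encode nothing but the examples, so skewed or sparse training data yields skewed predictions, and the model will confidently extrapolate to inputs it has never seen. Scaled up to billions of weights, those are the roots of bias and hallucination.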
The quizzes in AI 101 are based on reputable learning science. Often the quiz will introduce a new concept or ask the user to stretch what they just learned to cover a new situation. There’s good evidence to think that this sort of “pre-assessment,” followed quickly by lessons teaching the correct answer, does improve retention in general.
But as we said, the AI 101 quiz questions consistently have one “obviously correct” answer that maps to the course’s framing, limiting the potential to challenge the user’s understanding. Additionally, we found minimal tailoring of text-message responses to the user’s quiz answers, despite the affordances of the interactive platform. If one user selects what is considered a right answer while another selects a wrong one (we tested this), the course responds with similar if not identical information. Better quizzes in AI 201 could perhaps be assessed by an LLM, with adaptive responses that meet users where they are and stretch their understanding once they have acquired a solid base.
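As a rough sketch of what LLM-assessed quizzes could look like under the hood, the snippet below sends a learner’s free-text answer, together with a grading rubric, to a language model and asks for feedback tailored to that answer. This is a hypothetical illustration, not how the DOL/Arist course works: the model name, rubric, and prompt wording are placeholders, and a real course would need careful prompt design and human review.

```python
# Hypothetical sketch of LLM-assessed, adaptive quiz feedback. Assumes the
# OpenAI Python SDK with an API key in the OPENAI_API_KEY environment variable;
# the model name, rubric, and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

question = ("Your employer starts mandating that all workers use AI. "
            "This may enable your employer to monitor your productivity. "
            "What are your options?")
rubric = ("Strong answers mention at least one of: asking about the monitoring "
          "policy, consulting HR or a union representative, checking applicable "
          "labor law, or negotiating how AI-derived metrics are used.")
learner_answer = "I guess I would just have to use it and hope for the best."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system",
         "content": ("You are a patient workplace-AI tutor. Grade the learner's "
                     "answer against the rubric, then give two sentences of "
                     "feedback that meet the learner where they are and suggest "
                     "one concrete next step.")},
        {"role": "user",
         "content": (f"Question: {question}\n"
                     f"Rubric: {rubric}\n"
                     f"Learner answer: {learner_answer}")},
    ],
)

print(response.choices[0].message.content)  # tailored feedback, not a canned reply
```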
The daily challenges in AI 101 (Quick Draw, Udio music generation, fridge photo recipes) are well-designed to get people past the intimidation barrier. They’re low-stakes, fun, and demonstrate AI capabilities concretely. But for AI 201 they could be more effectively leveraged to actually show people how AI can be useful in their work and daily lives, and can (as promised by AI 101) “save them 5 hours per week”.
The DOL’s press release announcing the course points to a collaboration with a private partner called Arist. Arist’s website at the time of writing states that “Arist is the #1 enablement AI. Arist’s agents orchestrate creation, delivery, and analytics, end-to-end.”
While the DOL announcement gives little detail as to the nature of the collaboration, if the company co-developed actual course content using generative AI, this fact should be disclosed. One of us ran selected course content through Pangram, a tool that purports to detect AI-generated content, and the results came back suggesting it was 100% AI-generated. Without putting too much stock in that, we began to suspect that some of the faults in the course could be explained this way. The simplistic framing of how AI generates results (patterns/predictions, instructions/results) could come from AI: since LLMs are trained on old explanations of how LLMs work, they may reach for framings that are not up to date. Also, if each module and quiz was generated separately, that could explain the abrupt changes in terminology and the contradictions we identified regarding the sharing of private information. The use of AI for content creation isn’t a problem per se, but the failure to disclose it was a missed opportunity for a teachable moment on the utility and risks of generative content. And the contradictions regarding security and privacy, discussed earlier, should have been caught by human oversight.
Additionally, going forward, transparency about how commercial partners are involved can foster wider adoption of, and trust in, course materials and DOL initiatives. The final lesson of the course refers users to an Arist-sponsored AI summit featuring Tony Robbins and Dean Graziosi. While the summit appeared to be free, it raises the question of what other paid AI-enablement sessions or products these well-known coaches might offer. Graziosi has drawn attention for his role in other problematic training programs. Users deserve to know who benefits from pursuing the recommendations made by a Federal agency.
Make America AI-Ready offers significant insight into the Federal government’s priorities for reaching widespread AI literacy across the United States workforce. Although we have suggested several areas for development, the course content and the manner in which it was released are a useful start toward achieving this aim.
Complete text of all 7 lessons.
Lesson 1
Core Content
AI is already working for you. You probably used AI before you finished your morning coffee today and didn’t know it.
How it works: AI (Artificial Intelligence) is a system that looks at massive amounts of data, finds patterns, and makes predictions. That’s the secret: PATTERNS IN, PREDICTIONS OUT.
There’s a newer kind of AI called GENERATIVE AI that doesn’t just predict; it creates. Give it short instructions (aka “prompt”), and it can draft an email, explain a confusing form, or help plan your week.
When most people say “AI” today, they’re usually talking about generative AI. In this course, “AI” is used broadly, but the main focus is generative AI.
Q1: Imagine your friend says, “AI knows everything, it’s like a genius that’s always right.” What’s the best response?
Q2: Which is the BEST example of generative AI at work?
Check out the Quick Draw game by Google. Draw what it asks and watch AI guess what you’re sketching in real time.
Lesson 2
Core Content
AI learns like you do (minus the smoke alarm). Here’s the process in 3 steps:
Step 1: STUDY — AI studies millions of examples (websites, how-to videos, artwork, recipes) and looks for patterns. There are patterns in everything: how Shakespeare structures his plays, how Justin Bieber writes his lyrics, how recipes mix ingredients.
Step 2: PREDICT — After studying, AI makes a smart guess about the answer. AI doesn’t “know” the answer. It predicts. The system doing this pattern-finding and predicting is called a NEURAL NETWORK.
Step 3: IMPROVE — After guessing, AI is rewarded for correct answers and learns from mistakes. When you like a response, it learns that approach worked. When you don’t, it learns to try something different.
This whole process is called MACHINE LEARNING. It’s learning from trial and error, at a scale and speed no human could match.
Q1: You ask AI to “explain budgeting tips using a cooking analogy.” What is AI doing?
Q2: If AI’s learning process is spotting patterns, predicting what comes next, and improving with feedback… what’s the best way to work with AI?
Q3: AI sometimes states things that are false with total confidence. Why does this happen?
Q4: A coworker uses AI to write a report and submits it without reading it. The report contains a made-up statistic. Who is responsible?
Open a free AI tool and use this prompt: “Roast me in the format of a movie trailer voiceover: I’m a [role], from [place], and my biggest work pet peeve is [x]. Be very funny (appropriately).”
Lesson 3
Core Content
AI is powerful, but it’s not psychic. If you give it a vague instruction, you’ll get a vague result.
The instruction you give AI is called a PROMPT. Think of it like ordering food:
AI works the same way:
The difference isn’t luck. It’s the prompt. Better results are completely in your control.
Q1: Why does the quality of your prompt matter so much?
Q2: A friend says, “I tried AI once and it gave me useless junk.” What’s the most likely problem?
Q3: How should you think about communicating with AI?
Create a funny song with AI using Udio. Describe someone you know, pick a genre, and add fun details.
Lesson 4
Core Content
A prompt isn’t just a question. It’s a set of instructions. A strong prompt has 3 parts:
Think of it like calling a contractor:
The more specific your prompt, the less time you spend going back and forth with AI. One great prompt can save you ten “that’s not what I meant” follow-ups.
Q1: Which prompt would give you the best result for understanding a long report?
Q2: You’re planning a trip on a tight budget. Which prompt will get you the most useful result?
Q3 (Open-ended): “Write a workout plan.” What’s one way you could improve it?
Lesson 5
Core Content
AI can help with things that feel overwhelming: step-by-step repair guidance, breaking down confusing forms into plain language, learning the basics of an unfamiliar task in minutes.
One AI tool can play all 5 roles. You just change your prompt.
Q1: What’s the best example of AI complementing your skills?
Q2: You ask AI for advice on a medical symptom. It gives you a detailed, confident answer. What’s the best approach?
Q3: A small business owner drafts a social media post with AI. The draft is good but generic. What should they do?
Q4 (Poll): If you had AI as an assistant for one week, which role would you assign first?
Think of a challenge you’re facing at work. Drop it into your favorite AI tool, add details, and say: “Give me 3 ways to approach this. Put the pros and cons in a table and recommend next steps.”
Lesson 6
Core Content
You’ve learned how to talk to AI. Now let’s talk about what to do with what it gives back. Great results come from reviewing, refining, and improving. This applies to both what you generate with AI and what you see online.
Q1: When evaluating AI’s output, what are the best questions to ask? (Select all that apply)
Q2: You ask ChatGPT for the best Mexican food near you. You show up hungry. It’s a nail salon now. What happened?
Q3: AI plans a week of dinners but it requires ingredients you don’t have. What should you do?
Open your go-to AI tool and ask: “What are the best restaurants in [your area]?” Then iterate: redirect, fill gaps, clarify, tighten, and fact-check the results.
Lesson 7
Core Content
AI is a power tool, not a magic wand. How you use it matters. Being a smart AI user comes down to 4 things:
Q1: You prompt an AI tool and it gives you a detailed response. What’s the most responsible next step?
Q2: Which of these should you NOT enter into an AI tool?
Pick 1 area of your work or life where AI can genuinely help. Open your favorite AI tool and describe the area, then add: “Create a plan for how I can use AI to help me with this. Include the use cases to start with, simple ways to make it stick, and next steps to get started. Tailor it to someone who [describe your job/life].”