Something strange was showing up in the work Jeff Hancock was grading.
It was 2022, shortly after OpenAI released ChatGPT to the masses, and the Stanford professor noticed something was off in the research assignments he was scoring. “They looked pretty good, but they weren’t right at all,” Hancock tells CNBC Make It. “And since I had 100 students, I realized that another 10 assignments looked exactly the same and also weren’t right.”
The papers in question contained plenty of text without saying much of anything to advance the work, and they all did so in the same overly verbose style.
Kate Niederhoffer felt the same sinking sense of suspicion when someone reached out to discuss her research, but the request summarized her work in a way that made it clear the sender didn’t actually understand it.
“It felt like a huge amount of effort,” Niederhoffer says of reading the message that missed the mark. “I’m usually a generous reader. So I thought: Why does this feel like such hard work? Why is this so confusing?”
Niederhoffer and Hancock have a name for this phenomenon: the feeling you get when reading a convoluted or incomplete message or document that makes you think, “Wait, did a human write this, or is this AI?”
It’s called workslop, and they say it’s plaguing teams at all kinds of businesses.
40% of people received workslop in the last month
Workslop refers to AI-generated work content that “masquerades as good work, but lacks the substance to meaningfully advance a given task.”
That’s according to new research from BetterUp, where Niederhoffer is a vice president of the company’s labs, and the Stanford Social Media Lab, where Hancock is the founding director.
“It created a situation where I had to decide whether to rewrite it myself, have them rewrite it, or call it good enough,” as one recipient put it.
Like the AI-generated art and listicles of so-called slop that came before it, workslop looks familiar in an off-kilter, uncanny way, but at its core it has no meaning. Think: long, flashy-sounding prose that says nothing, or stale, boilerplate copy.
A recent BetterUp and Stanford survey of 1,150 full-time U.S. workers found that around 40% of people received workslop in the last month. Those workers estimate that, on average, 15% of the content they receive qualifies as low-quality, unhelpful AI-generated work. It’s happening across industries, but especially in professional services and technology.
One survey respondent, who works in finance, recalled how receiving AI-generated work from a colleague led to even more work.
Another respondent, a director in retail, said they had to follow up on the information they were sent and do their own research. “I then had to waste more of my time setting up meetings with other supervisors to address the issue, and then I wasted my own time having to redo the work myself.”
There are telltale signs of workslop, including “purple prose,” like using three paragraphs when one bullet point would suffice.
It can show up in a variety of formats, from bad code to decks with incomplete information to emails written in cryptic language, but it has the same effect: It adds work for the recipient, who has to make sense of it all. Ultimately, it can erode trust and productivity.
Niederhoffer found herself judging whoever sent her the workslop. “Why did they do this?” she wondered. “Can they not complete their work on their own? I don’t trust them. I don’t want to work with them again.”
The end result, she says, is “confusion, annoyance, wasted effort and some pretty serious layers of judgment.”
The $9 million workslop productivity tax
AI use at work has doubled since 2023, from 21% to 40% of employees, per Gallup, yet 95% of organizations see no measurable return on their investment in the technology, according to a recent MIT Media Lab report. Researchers at BetterUp and Stanford say workslop could be a big reason why.
People who encountered workslop say they spent an average of 1 hour and 56 minutes dealing with the aftermath of each instance. Based on self-reported salaries, that amounts to an invisible tax of approximately $186 per employee per month.
For an organization of 10,000 workers, that’s a more than $9 million hit to productivity in a year, researchers say.
(Note that this doesn’t account for any productivity gains companies or employees report from using AI.)
Now, with no effort, I can generate a lot of useless or counterproductive content very easily.
Jeff Hancock
Founding director of the Stanford Social Media Lab
Beyond the economic costs, there are emotional ones. Workslop recipients say it takes time and mental energy to figure out how to diplomatically address the work with their colleagues: 53% report being annoyed, 38% confused and 22% offended.
People also rethink their colleagues’ capabilities when they receive it. About half of workers say they view a colleague as less creative, less capable and less reliable after receiving workslop from them. Roughly a third say they notified a teammate or boss after receiving confusing AI-generated work, and a similar share say they’re less likely to want to work with the sender again.
And while sloppy work has been around forever, AI takes it to another level.
“It still takes a fair amount of effort to produce sloppy work. You have to write it. You may not think about it, but it still takes effort,” Hancock says. “Now, with no effort, I can generate a lot of useless or counterproductive content very easily.”
The human costs of the phenomenon are driven by “unwittingly transferring the burden onto others,” Hancock says, “because people think of [AI] as a tool for us working alone, but in reality we’re mediating human-to-human work.”
Reducing workslop
Minimizing low-quality AI-generated work, and all the consequences that come with it, depends on how organizations fold AI in, researchers say.
Businesses need to focus on organized approaches to adopting and promoting AI in the workplace, Hancock says. Without guidance from leadership, he says, workers may operate out of fear: worried they’ll be replaced if they don’t use AI, but judged if they do.
Reducing workslop comes down to “a commitment to the quality of the team’s work,” Hancock says. Teams need to spend time talking with one another about how they’re using AI and critiquing which applications best suit their needs.
[AI] can be unbelievably helpful, but that’s in stark contrast to this pure copy-and-paste mode, where we forget to check and have the tools do all the work instead of reinforcing human abilities.
Kate Niederhoffer
Vice president of BetterUp Labs
Workers also need to be transparent about when and where they’re using AI. Say you were pressed for time and used a generative AI chatbot to complete a presentation deck, for example. Telling your colleagues the work you’re sending was AI-generated can help them engage with what you were going for and what your goal is, and bridge any gaps, Hancock says.
Leaders should focus on human agency and encourage a “pilot mindset” that treats AI tools as giving workers more control, Niederhoffer says. Managers should be able to give specific reasons for using specific AI tools on specific projects, and offer clear messaging about the guidelines, policies and training that come with their use, she says.
A sense of agency over AI, Niederhoffer says, is “incredibly” important.

