Last week I had a discussion with a good co-worker about why other colleagues push more and more work onto us to have it verified or reviewed for correctness. This ranges from seven-page white papers that sometimes lack foundation or contain nothing new, to merge requests that look okay but are overly complicated and do not actually work.

On HN I found an article that describes exactly this issue. In "AI-Generated 'Workslop' Is Destroying Productivity", published by Harvard Business Review, the authors try to find reasons why 95% of organisations say they cannot report a measurable return on their AI investment, even though AI adoption has doubled. They identified one possible reason:

Employees are using AI tools to create low-effort, passable looking work that ends up creating more work for their coworkers. On social media, which is increasingly clogged with low-quality AI-generated posts, this content is often referred to as “AI slop.” In the context of work, we refer to this phenomenon as “workslop“. We define workslop as AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.

They differentiate between people who use AI to polish good work and people who quickly create content that is unhelpful, incomplete, or missing crucial context. The consequence is "that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work. In other words, it transfers the effort from creator to receiver."

This is exactly what my colleague and I experience. Reading the article makes it so obvious. It is very likely that many industry professionals will also resonate with the following realisation:

If you have ever experienced this, you might recall the feeling of confusion after opening such a document, followed by frustration—Wait, what is this exactly?—before you begin to wonder if the sender simply used AI to generate large blocks of text instead of thinking it through. If this sounds familiar, you have been workslopped.

This is exactly what we are going through more and more as people are pushed to adopt AI everywhere. AI is applied to standard tasks, automating them and making people faster at generating output. But the crucial part still needs to be done by a human: evaluating whether the output makes any sense, provides additional value, and actually works.

The transfer of effort from the creator to the receiver becomes a real problem for us. The article quotes a finance worker who sums up my dilemma as well:

It created a situation where I had to decide whether I would rewrite it myself, make him rewrite it, or just call it good enough.

On HN, one commenter sees Brandolini's law at work in AI content:

The amount of [mental] energy needed to refute bullshit [AI slop] is an order of magnitude bigger than that needed to produce it.

Having all this mapped out, I am still unsure what to do about it now. As a first step, I guess, it is good to ask whether some work was vibe-coded or AI-generated before reviewing it, so that I can brace myself for logic errors and a lot of filler text around the key messages. But what can we do to push people to use AI to polish their good work instead of creating AI slop that slows others down? To end on a positive note, the authors summed it up nicely:

Workslop may feel effortless to create but exacts a toll on the organization. What a sender perceives as a loophole becomes a hole the recipient needs to dig out of. Leaders will do best to model thoughtful AI use that has purpose and intention. Set clear guardrails for your teams around norms and acceptable use. Frame AI as a collaborative tool, not a shortcut. Embody a pilot mindset, with high agency and optimism, using AI to accelerate specific outcomes with specific usage. And uphold the same standards of excellence for work done by bionic human-AI duos as by humans alone.

No AI was used to generate the content of this post, only for generating the title image.