Depending on where you look, somewhere between half and almost all of the internet is now AI-generated. The figures swing wildly because methods differ, but the most careful recent analysis I have seen puts it at roughly three quarters of new webpages containing at least some AI text. Ahrefs studied 900,000 newly created pages from April 2025 and found that 74.2% showed signs of AI authorship. It’s a sobering baseline. The default web page today is compiled content, no longer written by a person.
If you feel the web has begun to drone, you’re not imagining it. Generators create prose that is technically fluent, semantically average, and spiritually absent. This was always the risk when tools trained to predict the next word became our most enthusiastic writers. But it’s worth asking whether the danger really lies in the tools when people publish their first drafts as if they were their tenth, and when, after a string of approvals, multiple shot-callers deem something “good enough” in the name of speed.
If human judgment is deployed in green-lighting mediocre content, is the problem really bad AI output… or bad taste?
The real bottleneck is judgment
You can tell a story about the media’s AI turn with one headline. Business Insider cut roughly one fifth of its staff in May, while leaning into AI tools and live events. The company framed the revamp as a response to pressure on traffic and an embrace of AI for efficiency. Five years ago, MSN laid off dozens of journalists and has since used AI software to create content. If publishers and media companies, considered the vanguard of top-tier content, are AI-optimistic, then the output can’t be all that bad.
We blame AI’s shortcomings for today’s bad content because it absolves us from confronting our own. But it wouldn’t matter how elegant the tool is, or how skillfully you use it, if you haven’t developed the intuition for what makes content relevant and thought-provoking. That intuition is a result of mastery.
When Photoshop went mainstream, it produced a flood of content similar to what we now call AI slop. We saw retouching overkill, missing limbs, unrealistic body shapes, and downright botched imagery published in magazines and even on billboards. That means someone looked at those images and said, “that’ll do, print it!”
Surprisingly, after much public pushback, users didn’t abandon Photoshop. Instead, they resolved to get better at using it. They also started learning what makes a good photo so they could fiercely avoid the mistakes that lead to bad ones. Eventually, the software was embraced widely because it delivered results and efficiency, and you became a nuisance if you were stuck up about Photoshopped images. Still, that hasn’t stopped a new wave of weirdly edited photos from surfacing, because people can’t help it. The difference is that, now, anyone can look for tell-tale signs like warping, inconsistent lighting, strange shadows, or repeated patterns. Because of that, better judgment is exercised to put out better content. Just because an image is Photoshopped doesn’t mean it can’t be elegant. And that elegance is the result of proper judgment applied to a tool.
That same judgment is used when deliberating on AI-generated or AI-assisted content. LLMs excel at synthesis. They can compress literature into a paragraph, a dataset into a sentence, a mood board into a palette. What they struggle with is ideation that breaks a line of tradition. Novel ideas demand risk and context. That still starts with people. Granted, we might have one good idea for every ten poor ones. But the working skill is not conjuring the one. It is discarding the ten.
Is taste subjective? What’s the standard?
People like to dodge the taste question by calling it subjective. Yes, taste has subjectivity, but it also has consensus. In design, food, music, and writing, communities build shared standards over time. You can learn those standards and practice them. You can also cultivate the inner meter that tells you when a thing is not an interesting thing.
There are zones where good taste seems unnecessary. Meme culture looks like one of them, but even the best memes depend on cultural fluency. They rely on references, tone, and communal timing. As mindless as this content category may seem on the surface, people still feel it when a meme lands perfectly.
Natalie Stoclet wrote a Forbes piece that still circulates inside design teams. She says the best taste “introduces a natural quality to the adoption of what is considered good taste. A kind of taste that can’t be taught, bought or borrowed.” The paradox is instructive. You can study the canon and still miss the point. But you can develop taste through stubborn, attentive living.
A field guide to your own taste
Ask yourself a few questions the next time you produce an AI draft:
- What new insight, feeling, or utility does this deliver that I didn’t have before engaging with it?
- Where should this create texture through pacing, silence, negative space, visual contrast, or narrative turns, and does it actually do so?
- What distinctly human judgment did I add in structure, sequencing, or emphasis that a competent tool would not have contributed?
- Which single element would I cut with zero regret to improve clarity and flow?
These checks might seem like a tedious step that defeats the purpose of using AI, but this is one habit that can move your work from mediocre to worthwhile. If you’re gonna do something, do it right. Or at least get it 80% there.
The cognitive bill is coming due
The balance we need to strike with AI is learning how to use it while keeping our critical thinking intact. A new set of studies suggests something uncomfortable: Heavy reliance on conversational AI may dull the part of the brain that carries critical engagement during tasks.
An MIT Media Lab experiment with 54 participants compared three groups assigned writing tasks. One wrote with an LLM. One used search. One wrote unaided. The LLM group showed the weakest neural engagement and produced essays that were more formulaic and less original. The brain-only group scored higher in executive control and reported a greater sense of ownership. The sample was small and the work is early, but the direction is clear. Overuse of the tool can reduce the very capacities that support good judgment.
It would be easy to overreach here. Tools also scaffold learning when used with intent. The study itself notes that balanced use can help. The point is precaution. If you spend all day accepting the first pass, you may forget how to recognize a second pass that matters.
Educators are already wrestling with a related question. If students rely on generators for schoolwork, what happens to the formation of discernment? Recent reporting on classroom policy shows a pendulum swing back toward in-person writing to discourage AI-mediated shortcuts. That is a crude tool for a cultural problem, but it highlights the same fear professionals have. If we stop exercising judgment, we lose it.
The research community offers a caution light rather than a stop sign, however. The MIT Media Lab group also noted that balanced use and switching conditions can restore healthier activity. So, the message is not to ban models, but to retain agency. You are allowed to take the longer path when the work merits it. That is how taste survives.
What happens to taste when you do the work
The more time you spend inside a craft, the better your intuition grows around excellent work. Skilled writers recognize flat sentences in any medium. Skilled animators notice broken easing in a second. Skilled audio engineers hear the wrong reverb before the chorus arrives. Skill oxygenates taste.
The gift of AI is that it removes certain kinds of friction. The risk is that it also removes the hours that would have made your taste sharper.
I’m not telling you to stop using AI altogether. But the little bit of thinking you do upfront helps your output a ton.
Here’s my suggested workflow for keeping your edge. It skews toward written content, but you can take the principles and apply them to your own medium. Personally, it helps me become more involved in the output, and therefore more proud of it:
- Separate ideation from synthesis. Use AI to gather, summarize, and transform. Work out ideas in a notebook or on a walk. Return to the machine only after you can articulate the thesis in one sentence a stranger would understand.
- Write a human zero draft. Even if it is rough, put down your own words before you prompt the model to smooth anything.
- Interrogate the model like a colleague. Ask for the strongest counterexample, the nonobvious source, the paragraph with the highest information density.
- Edit twice, publish once. The first edit improves clarity. The second edit improves readability and rhythm. On the second pass, remove any sentence that sounds like it came from a template generator.
- Add a friction ritual. Read the piece aloud. Print it. Change the medium so your eye stops skimming.
- Create a kill list. Maintain a private set of clichés, structures, and transitions you will not allow. The list grows with experience.
How to know if you have good taste
Taste is not a halo. Some people carry an instinct for form and feeling. For the rest of us, it is a track record built across formats. Have you developed the eye for good content? Try these signals:
- You predict reception with uncomfortable accuracy. Your calls about what will land match the audience more often than chance.
- You cut what others fear to cut. The kill list grows and the work gets tighter.
- You can argue the other side of the creative choice. You’re mindful of your bias. You test alternate beats, camera angles, transitions, rhythms, and interactions before selecting the one that serves the idea.
- Your references travel across disciplines. You draw from film grammar, music theory, design systems, literature, product ergonomics, and cultural history because you actually study them, then translate those lessons into the format at hand.
- People seek your notes across teams. Not for approval, but because your feedback improves pacing, structure, clarity, and emotional payoff in whatever medium they are working in.
Make a practice of collecting three exemplars a week in your field, write down why they work in precise terms, and imitate one move from each in your next piece. Do this for a year. Eventually, you’ll develop your own unique voice and approach.
What leaders owe their teams
Executives cannot outsource taste to a prompt library, though many are under pressure to try. If you run a newsroom, a studio, or a brand, the better move is to build a culture where speed never outruns discernment.
More organizations will formalize AI as an operational core. The ones that win will not be the ones that adopted first. The advantage belongs to teams that enforce a higher bar, not just a faster pipeline.
Consider a few policies that have traveled well:
- Declare AI as an assistant, not a source. A model can suggest a line of argument. It cannot be your reporting.
- Institute a human standard gate. No AI-assisted piece moves to publish without a named person who certifies that it meets the organization’s quality bar.
- Audit the audit. Sample outputs monthly for hidden repetition, template drift, and information density. Track the metrics that correlate with reader trust, not just clicks.
- Pay for slowness. Reward teams that refuse to ship work that is almost good. Slowness can be a cost, but it can also be a moat.
- Invest in taste education. Study the canon of your field together. Host postmortems that analyze why a piece worked. Bring in outside critics who can bruise you kindly.
Synthesis as a superpower, originality as a responsibility
Let’s give AI its due. Synthesis at human scale is rare. Most of us can’t read ten thousand documents and pull a coherent summary by next Tuesday. Tools can. They surface patterns, generate drafts, and reduce the cost of trying multiple structures. The fantasy is believing that it makes originality optional. Machines can repaint old lines beautifully. People are still responsible for drawing new ones.
It pays to return to the old questions: Does this work surprise me? Does it carry a perspective? Does it show its homework without reading like homework? Keep asking until your sense of quality starts to feel instinctive. That instinct is just pattern recognition earned the long way. And as AI grows better at imitation, that kind of human instinct, or taste, becomes the rarest asset left. When output converges, sharp judgment is the only real edge.
In the end, we curate what goes out the door. If the output reads robotic, inauthentic, and bland, some reflection is due for the person who said yes.
Does AI deserve a chance? It’s already earned it. The real question is, is your taste good enough to earn the audience you want? ●