🔮 Future: AI & Creativity

How Artificial Intelligence is Redefining Human Creativity in 2024

📅 March 4, 2026 ⏱️ 6 min read
2024 marked a turning point: artificial intelligence outperformed the average human on creative thinking tests. From painting to music, writing and video, AI tools are fundamentally redefining what creativity means. But what does this actually entail?
- Top 1%: GPT-4 on the Torrance Tests of Creative Thinking
- 10M: images generated per day via MidJourney
- $250M: Suno funding round (AI music)
- 60 sec: video from text with Sora

📖 Read more: Antimatter: Energy Beyond Imagination

What Does “Creativity” Mean for a Machine?

Creativity was long considered an exclusively human trait — the ability to combine ideas in original ways, express emotions, produce something new. Today, according to a University of Montana study (2024), GPT-4 scored in the top 1% of all participants on the Torrance Tests of Creative Thinking — tests that measure so-called “divergent thinking,” the capacity to generate multiple, unexpected solutions to a problem.

This doesn't mean AI “feels” or “gets inspired.” It means it can produce outputs that are judged as more creative than those of the average human when evaluated blindly. The distinction between “mechanical creativity” and “authentic expression” is sparking one of the most intense philosophical debates of our era.

MidJourney: When AI Won an Art Competition

In September 2022, Jason Allen won first place in the digital art category at the Colorado State Fair's annual fine art competition with a piece created entirely using MidJourney. The event triggered a firestorm: artists condemned the use of AI, while technologists argued that crafting the right prompt requires its own skill set.

Today, MidJourney has evolved dramatically. The platform, which describes itself as a “community-funded research lab,” consists of just 60 people but generates millions of images daily. From photorealistic portraits to abstract art and architectural designs, quality has reached a level where even art experts struggle to distinguish its output from human-made work.

🎨 Key detail: According to MidJourney, their goal isn't replacing artists but “amplifying human imagination.” The company states: "We believe we are all midjourney — that we have a rich past behind us and an unimaginable future ahead — and the question we want to most help answer is: what do we want to become?"

Sora: Video from Text in 60 Seconds

In February 2024, OpenAI unveiled Sora, a model capable of generating complete videos up to 60 seconds long from a written description alone. The technology is built on diffusion models combined with a transformer architecture — similar to the one powering GPT models.
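The core idea behind diffusion models is to gradually add noise to training data and teach a network to reverse the process. A toy sketch of the forward-noising step — every parameter here (schedule, step count, array shape) is illustrative, not Sora's actual configuration:

```python
import numpy as np

# Toy forward diffusion: progressively noise a "clean" sample x0.
# Schedule and shapes are illustrative only, not Sora's real values.
rng = np.random.default_rng(0)
x0 = rng.normal(size=(8, 8))           # stand-in for a video-frame patch

T = 100                                # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)     # linear noise schedule
alphas_bar = np.cumprod(1.0 - betas)   # cumulative signal retention

def noise_to_step(x0, t):
    """Sample x_t = sqrt(a_bar)*x0 + sqrt(1-a_bar)*eps (closed form)."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1 - alphas_bar[t]) * eps

x_mid = noise_to_step(x0, T // 2)      # partially noised
x_end = noise_to_step(x0, T - 1)       # mostly noise by the final step
```

During training, a network — in Sora's case, reportedly a transformer operating on spacetime patches of video — learns to predict the added noise, which lets it run this process in reverse and turn pure noise into a coherent video.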

Sora's videos include photorealistic cities, cinematic close-ups, even historical footage. OpenAI stated the model "understands not only what the user has asked for in the prompt, but also how those things exist in the physical world." However, it acknowledges that the model still struggles with the physics of complex scenes, cause-and-effect relationships (e.g., a cookie showing no bite mark after being bitten), and spatial orientation.

📖 Read more: Artificial Photosynthesis: Fuel from Sunlight

"We're teaching AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction."
— OpenAI, Sora Page

Suno: Making Music Accessible to Everyone

Suno, a company headquartered in Cambridge, Massachusetts, developed an AI model that creates complete musical tracks — with vocals, orchestration and production — from a text description alone. In November 2025, the company secured $250 million in funding at a $2.45 billion valuation.

Millions of users worldwide already use Suno to create their own music, regardless of musical background. The company states it is "a music company built around a single purpose: transforming how people create and experience music." It recently signed a partnership with Warner Music Group to build the future of interactive music.

Claude, GPT and the Writing AI

In text generation, LLMs (Large Language Models) have leapt forward. Anthropic launched the Claude 3 family in March 2024 — three models (Haiku, Sonnet, Opus) that, according to the company, demonstrate “nuanced content creation” and “follow complex, multi-step instructions,” available in 159 countries.

These models write poetry, screenplays, essays, code — often at a quality that's hard to distinguish from human writing. But Meta's chief AI scientist Yann LeCun offered a reality check in a January 2024 interview: "LLMs hallucinate, they don't really understand the real world. The vast majority of human knowledge is not expressed in text — it's in the subconscious part of your mind, that you learned in the first year of life before you could speak."

🧠 The gap: A four-year-old child has received 50 times more data through their optic nerve than the largest LLMs consumed during their entire training. That's 20 megabytes/second × 16,000 hours = 10^15 bytes, compared to 2×10^13 bytes for an LLM, according to Yann LeCun.
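LeCun's comparison is simple arithmetic, and a quick sanity check confirms the figures he cites (all numbers are his, from the TIME interview):

```python
# Back-of-the-envelope check of LeCun's data-gap estimate (bytes).
optic_nerve_rate = 20e6          # ~20 MB/s through the optic nerve
waking_hours = 16_000            # roughly a child's first four years
child_bytes = optic_nerve_rate * waking_hours * 3600

llm_training_bytes = 2e13        # ~2 x 10^13 bytes of training text

print(f"child: {child_bytes:.2e} bytes")        # ~1.15e15, i.e. ~10^15
print(f"ratio: {child_bytes / llm_training_bytes:.0f}x")
```

The ratio works out to about 58x, which rounds to the "50 times more data" figure quoted above.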

📖 Read more: Artificial Trees: Machines That Absorb CO2

Ethical Questions and Intellectual Property

These advances create thorny legal problems. If an AI was trained on millions of artworks without the creators' permission, who owns the rights to the output? Dozens of lawsuits are already underway worldwide — from photographers, musicians and writers against AI companies.

OpenAI addresses these concerns with safety mechanisms: content filters, classifiers to detect AI-generated material, and planned integration of C2PA metadata. MidJourney and Suno follow similar patterns, though training data details remain a point of contention.

Where We Stand Today — and Where We're Headed

The situation in early 2026 is clear: on technical tests, AI outperforms the average human in multiple creative domains. But “creativity” isn't just technical proficiency — it's intention, lived experience, context. As LeCun noted: "AGI is not just around the corner — it's going to require some pretty deep perceptual changes."

The market is moving toward a co-creation model: AI as a tool in creators' hands, not their replacement. Anthropic calls its models “creative companions.” Suno wants to “bring the joy of musical expression to every person, everywhere.” The real revolution may not be that AI is becoming creative — but that it's putting creative tools in the hands of billions of people who never had access to them before.

The world is changing faster than we can process. Perhaps the most creative act of this era isn't what a machine produces — but how we choose to use it.

🔗 Article Sources

OpenAI — Sora: Creating Video from Text

Anthropic — Introducing the Next Generation of Claude

Suno — About: Music Doesn't Stop

TIME — Yann LeCun on AGI, Open-Source, and AI Risk
