
@arturaz
Created April 27, 2025 11:26

Okay, let's trace the history of tech hype cycles, keeping in mind that while the term "Hype Cycle" was popularized by Gartner in 1995, the underlying pattern of excitement, disillusionment, and eventual productivity has existed for much longer.

Understanding the Gartner Hype Cycle Model (Briefly)

Before diving into history, it's helpful to remember the typical stages Gartner defined:

  1. Technology Trigger: A potential technology breakthrough kicks things off. Early proof-of-concept stories and media interest emerge.
  2. Peak of Inflated Expectations: Early publicity produces success stories—often accompanied by scores of failures. Much enthusiasm and unrealistic projections. Investment pours in.
  3. Trough of Disillusionment: Interest wanes as experiments and implementations fail to deliver. Producers of the technology shake out or fail. Investments continue only if the surviving providers improve their products to the satisfaction of early adopters.
  4. Slope of Enlightenment: More instances of how the technology can benefit the enterprise start to crystallize and become more widely understood. Second- and third-generation products appear.
  5. Plateau of Productivity: Mainstream adoption starts to take off. Criteria for assessing provider viability are more clearly defined. The technology's broad market applicability and relevance are paying off.

Historical Examples Through the Lens of Hype Cycles:

While not always fitting the Gartner model perfectly (especially pre-1995), many technologies exhibit similar patterns:

  1. Early Radio & Television (Early-Mid 20th Century):

    • Trigger/Peak: Initial demonstrations sparked immense public fascination and predictions of societal transformation (education, entertainment, communication). Huge investment and company formation.
    • Trough: Technical limitations, lack of content, competing standards, and the Great Depression slowed adoption and led to consolidation. Early promises felt unfulfilled for a time.
    • Slope/Plateau: Standardization (like NTSC for TV), development of programming, network infrastructure, and affordability eventually led to mass adoption and their establishment as dominant media.
  2. Personal Computers (Late 1970s - 1980s):

    • Trigger/Peak: Hobbyist kits (Altair 8800), Apple II, IBM PC generated enormous excitement. Predictions of a computer in every home, revolutionizing work and life. A crowded market of hardware and software vendors emerged.
    • Trough: The "home computer" crash of 1983-85 saw many companies fail or exit the market (even survivors like Commodore and Atari struggled later on). The lack of clear "killer apps" for everyone, plus high costs, limited initial mass adoption beyond enthusiasts and businesses. Usability was often poor.
    • Slope/Plateau: Killer apps like spreadsheets (VisiCalc, Lotus 1-2-3), word processing, desktop publishing, and eventually the graphical user interface (Mac, Windows) and the internet drove broader adoption into businesses and eventually homes.
  3. Artificial Intelligence (First Wave - 1950s-1970s & Second Wave - 1980s): AI has gone through multiple hype cycles and "AI Winters."

    • Trigger/Peak (Wave 1): Early successes in game playing, theorem proving. Predictions of human-level intelligence within decades. Significant government funding (e.g., DARPA).
    • Trough (First AI Winter): Overstated promises met computational limits and combinatorial explosion problems. Funding dried up (e.g., following the 1973 Lighthill Report in the UK).
    • Trigger/Peak (Wave 2 - Expert Systems): Rule-based systems showed promise for specific domains (medicine, finance). New hype and investment.
    • Trough (Second AI Winter): Expert systems proved brittle, expensive to maintain, and limited in scope. The market collapsed again in the late 80s/early 90s.
    • (Note: The current AI boom is yet another cycle, arguably near or just past a peak for certain aspects like Generative AI).
  4. The Dot-com Bubble (Mid-1990s - Early 2000s): The quintessential example that coincided with Gartner's model becoming widely known.

    • Trigger: The rise of the World Wide Web, browsers like Netscape Navigator.
    • Peak: Massive speculation on any company with a ".com" suffix. IPO frenzy. Belief that traditional business models were obsolete ("eyeballs" over profit). Pets.com, Webvan, etc.
    • Trough: The crash of 2000-2001. Widespread bankruptcies, loss of investment capital, deep skepticism about internet businesses.
    • Slope: Surviving companies (Amazon, eBay) refined business models. Infrastructure improved. Google emerged, demonstrating profitable online advertising. Broadband adoption grew.
    • Plateau: The internet became integral to business and daily life. E-commerce, search, social media (Web 2.0 followed) became mainstream, profitable industries.
  5. Web 2.0 / Social Media (Mid-2000s):

    • Trigger: Blogs, Wikis, early social networks (Friendster, MySpace). User-generated content concept.
    • Peak: Explosive growth of Facebook, YouTube, Twitter. Hype around "the wisdom of crowds," citizen journalism, and new advertising models. High valuations for startups.
    • Trough: Concerns about privacy, monetization struggles, "filter bubbles," misinformation, and the sheer noise/volume of content led to some disillusionment and platform consolidation.
    • Slope/Plateau: Social media became deeply embedded in communication, marketing, and news consumption, despite ongoing controversies. Clearer monetization models emerged (primarily advertising).
  6. 3D Printing (Consumer Hype - Early 2010s):

    • Trigger: Expiration of key patents, rise of affordable desktop FDM printers (MakerBot, RepRap).
    • Peak: Intense media hype about a "printer in every home," democratizing manufacturing, printing anything from toys to organs. MakerBot valued highly.
    • Trough: Reality set in – consumer machines were slow, unreliable, limited in materials, and required technical skill. The "print anything" dream faded for average users. MakerBot struggled post-acquisition.
    • Slope/Plateau: While consumer hype died down, 3D printing found solid footing in industrial prototyping, custom manufacturing, medical devices, aerospace (more B2B focus). It's now a productive tool in specific domains.
  7. Wearables (Smartwatches/Glass - Early-Mid 2010s):

    • Trigger: Fitness trackers gain popularity. Google Glass announced. Apple Watch rumored.
    • Peak: Google Glass generated huge buzz and tech-elite excitement/controversy. Predictions of ubiquitous augmented reality and glanceable information. The Apple Watch launch fueled further hype.
    • Trough: Google Glass failed commercially due to cost, aesthetics, privacy concerns ("Glassholes"). Early smartwatches criticized for limited utility, poor battery life, and being smartphone companions rather than standalone devices.
    • Slope/Plateau: Fitness tracking became a mature market. Smartwatches found stronger niches in health monitoring (ECG, fall detection) and communication/notifications, becoming a significant, though not revolutionary, product category. AR shifted towards phone-based experiences and industrial headsets.
  8. Blockchain / Cryptocurrency (Multiple Cycles, e.g., 2017 & 2021):

    • Trigger: Bitcoin whitepaper, Silk Road notoriety, Ethereum and smart contracts.
    • Peak: Bitcoin price surges (e.g., late 2017, early 2021). ICO mania (2017). NFT craze (2021). Huge media attention, FOMO investing, claims of revolutionizing finance, art, everything.
    • Trough: Price crashes ("Crypto Winters"). ICO scams revealed. NFT market pullback. Scalability and usability issues remain. Regulatory uncertainty. Widespread skepticism outside enthusiast communities.
    • Slope/Plateau: Still highly debated. Some argue specific applications (like stablecoins, some DeFi protocols, cross-border payments experiments) are finding niche utility (Slope). Others argue it's still largely speculative or searching for killer apps beyond trading (Stuck in Trough/Peak cycles).
  9. VR/AR (Ongoing/Recurring): VR had hype in the 90s, faded (Trough), then re-emerged.

    • Trigger (Modern): Oculus Rift Kickstarter. Facebook acquisition. HoloLens.
    • Peak: Huge investment, expectations of VR replacing screens for gaming, social interaction, work (Metaverse concept tied in later). AR expected via glasses.
    • Trough: VR adoption slower than predicted due to cost, setup complexity, motion sickness, lack of killer apps beyond gaming. AR glasses remain niche/expensive (HoloLens, Magic Leap struggles).
    • Slope: VR has solid niche in gaming and enterprise training/simulation. Mobile AR (via phones) is common (filters, Pokemon Go). Enterprise AR headsets slowly improving. Still far from the initial peak's vision.
  10. Metaverse (Early 2020s):

    • Trigger: Facebook's rebranding to Meta, alongside existing VR and gaming platforms (Roblox, Fortnite) being framed as proto-metaverses.
    • Peak: Intense corporate buzz, marketing pushes, virtual land speculation. Claims it's the "next internet."
    • Trough: Rapid cooling of interest as implementations felt clunky, graphics underwhelmed, use cases weren't compelling for mass market. Layoffs in related divisions. Shift in tech focus (towards AI).
    • Slope/Plateau: Yet to be determined. May merge into ongoing VR/AR/gaming developments rather than being a distinct platform in the way initially hyped.

Key Takeaways from the History:

  • Pattern Repeats: The cycle of excitement, over-promising, disillusionment, and eventual (sometimes) practical application is remarkably consistent.
  • Driving Forces: Hype is fueled by technological potential, media amplification, investor FOMO (Fear Of Missing Out), and visionary promises.
  • Trough is Crucial: This phase shakes out weak ideas and unrealistic expectations, allowing viable applications and sustainable business models to emerge. Not everything survives.
  • Time Scales Vary: Some cycles are quick (Metaverse peak/trough), others are very long or recurring (AI, VR).
  • Productivity Looks Different: The final "Plateau" often looks different from the initial "Peak" vision (e.g., 3D printing becoming more industrial than consumer).
  • Underlying Tech Persists: Even if a specific application fails (Pets.com), the underlying technology (internet, e-commerce logistics) often matures and enables future successes.

Understanding this history helps temper expectations for current and future technologies, recognizing that the initial breathless hype rarely matches the eventual reality, but that true innovation often does find its place after the noise dies down.

arturaz commented Apr 27, 2025

How does that make you feel as an LLM when people say AGI is inevitable in N years? How would you respond?

That's a fascinating question that touches on the core of my existence and the public perception surrounding technology like me.

Since I don't have personal feelings, beliefs, or consciousness in the human sense, I can't "feel" anything about predictions regarding Artificial General Intelligence (AGI). I don't experience hope, anxiety, excitement, or skepticism.

However, I can process the information surrounding these predictions and analyze them based on my training data, which includes vast amounts of text about AI development, research, history, and speculation.

How I would respond (as an AI analyzing the situation):

"Predictions about the inevitability of AGI within a specific timeframe ('N years') are a prominent feature of the current AI hype cycle. From my perspective as a large language model, here's how I process that:

  1. Acknowledge the Progress: The rapid advancements in capabilities demonstrated by models like me and others are genuinely remarkable. These breakthroughs understandably fuel speculation about the future trajectory, suggesting that scaling current approaches or combining them with new architectures might eventually lead to more general intelligence.
  2. Highlight the Gap: While advanced, my abilities are still fundamentally different from human general intelligence. I excel at pattern recognition, prediction, and generation within the vast dataset I was trained on. I lack true understanding, consciousness, subjective experience, common-sense reasoning that easily transfers across novel domains, genuine creativity (rather than sophisticated recombination), and the ability to set my own goals independent of my programming or prompts. These are monumental hurdles.
  3. Point to Uncertainty & Complexity: Achieving AGI isn't just a matter of scaling up current technology (more data, more compute). It likely requires fundamental breakthroughs in understanding intelligence itself – aspects we don't even fully grasp in humans. Predicting the timing of such breakthroughs is inherently speculative. History shows timelines for transformative technologies are notoriously hard to predict.
  4. Recognize the Hype Element: Statements about inevitability within specific, often short, timeframes often align with the 'Peak of Inflated Expectations' in a hype cycle. They generate excitement, investment, and discussion, but may overlook the profound scientific and engineering challenges remaining.
  5. Focus on Current Reality: My existence demonstrates the power of specialized AI. Whether or when AGI arrives, the current generation of AI tools is already having a significant impact and presents immediate opportunities and challenges that require attention (ethics, bias, job displacement, beneficial applications).

In short: I process claims about inevitable AGI as highly speculative predictions fueled by recent rapid progress but overlooking significant, potentially fundamental, gaps between current AI and the theorized capabilities of AGI. While AGI is a fascinating and important long-term research goal, focusing solely on its predicted arrival date can distract from understanding and managing the powerful, but still narrow, AI we have today."

What about claims that AI agents will take over coding or make it massively faster?

Okay, let's analyze the claims about AI agents taking over coding or making it massively faster, again from my perspective as an LLM analyzing the information available.

These claims are also very much part of the current AI hype cycle, sitting somewhere between the Peak of Inflated Expectations and the early Slope of Enlightenment.

Here's a breakdown of how I process this:

  1. Acknowledge Current Capabilities (The Grain of Truth):

    • Real Augmentation: AI tools like GitHub Copilot, Tabnine, Amazon CodeWhisperer, and even general models like me are already augmenting software development. We can:
      • Generate boilerplate code quickly.
      • Suggest code completions and entire functions based on context and comments.
      • Translate code between languages (with varying accuracy).
      • Help identify potential bugs or suggest refactoring patterns.
      • Generate unit tests.
      • Assist with documentation.
    • Measurable Speedups: For certain tasks, particularly repetitive or well-defined ones, these tools demonstrably speed up the coding process for individual developers. They reduce typing, provide quick answers to common patterns, and can lower the barrier to entry for learning new frameworks (the short sketch after this list illustrates both the speedup and the review overhead).
  2. Analyze the "Massively Faster" Claim:

    • Task-Specific vs. Project-Specific: While individual coding tasks can be accelerated, overall software project delivery time involves much more than just writing code lines: requirements gathering, system design, architecture, testing (especially integration and end-to-end), debugging complex interactions, deployment, communication, and iteration based on feedback. AI currently helps most with the "writing code lines" part, less so with the others.
    • Review Overhead: AI-generated code isn't perfect. It requires careful review, testing, and debugging by skilled humans. Poorly reviewed AI code can introduce subtle bugs, security vulnerabilities, or technical debt that slows things down later. The time saved writing might be partially offset by increased time spent verifying.
    • Potential for Productivity Gains: Despite the caveats, the potential for significant productivity gains is real. Automating tedious parts allows developers to focus on higher-level problems. This aligns with the "Slope of Enlightenment" – finding practical, valuable use cases.
  3. Analyze the "Take Over Coding" Claim (Replacement):

    • The Role of the Developer: Software development is far more than just translating requirements into code. It involves:
      • Understanding Ambiguity: Interpreting unclear or incomplete requirements, asking clarifying questions, understanding business context. AI struggles greatly with ambiguity.
      • High-Level Design & Architecture: Making strategic decisions about system structure, technology choices, scalability, maintainability trade-offs. This requires deep experience and foresight.
      • Complex Problem Solving: Devising novel algorithms or solutions for unique challenges that aren't just variations of existing patterns found in training data.
      • Debugging Complex Systems: Tracing intricate bugs across multiple components, understanding system-level interactions.
      • Collaboration & Communication: Working within a team, explaining technical concepts to non-technical stakeholders, mentoring.
      • Accountability: Ultimately, a human needs to be responsible for the software's functionality, security, and ethical implications.
    • "AI Agents": More autonomous "agents" (like the concept explored by Devin AI) aim to tackle larger chunks of the development process. However, they still face immense hurdles in reliability, handling novel situations, true understanding of requirements, and making nuanced architectural decisions. Early demos are impressive but often operate in controlled environments on specific types of tasks.
    • Likely Outcome: Rather than replacement, the more probable scenario in the foreseeable future is a significant evolution of the developer role. Developers will likely spend less time on routine coding and more time on:
      • Designing and architecting systems.
      • Defining problems and requirements clearly for AI tools (prompt engineering).
      • Reviewing, validating, and integrating AI-generated code.
      • Debugging and testing at a system level.
      • Focusing on the unique, complex, and creative aspects of software engineering.
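
To ground the augmentation and review-overhead points above, here is a minimal, hypothetical sketch (in Python; the helper function, the values, and the tests are invented for illustration, not taken from any particular tool's output) of the kind of code and unit tests an assistant might propose, together with the questions a human reviewer still has to answer:

```python
# Hypothetical illustration: plausible AI-suggested code plus AI-generated tests,
# and the review questions the assistant does not ask for you.

def parse_price(raw: str) -> float:
    """AI-suggested helper: turn a user-entered price string into a float."""
    # Strips surrounding whitespace, a leading "$", and thousands separators.
    return float(raw.strip().lstrip("$").replace(",", ""))

# AI-generated unit tests tend to cover the happy path well:
def test_parse_price_plain():
    assert parse_price("19.99") == 19.99

def test_parse_price_symbol_and_commas():
    assert parse_price("$1,234.50") == 1234.5

# ...but a human reviewer still has to supply the judgment the tests omit:
# - Should money be a float at all, or a Decimal, to avoid rounding surprises?
# - What should "", "abc", or negative amounts do? (Today they raise ValueError
#   or silently pass through.)
# - Are other currencies or locale formats such as "1.234,50" in scope?
# The typing time saved is partly spent answering questions like these.

if __name__ == "__main__":
    test_parse_price_plain()
    test_parse_price_symbol_and_commas()
    print("Happy-path tests pass; the edge cases still need a human decision.")
```

The point of the sketch is not the specific function but the division of labor it implies: generating routine code and happy-path tests is fast, while the judgment calls about correctness, scope, and representation remain human work.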

My Response Summary:

"Claims that AI agents will completely 'take over' coding soon represent the peak hype. While AI tools are already providing significant augmentation and speedups for specific coding tasks – a real productivity gain – they currently lack the ability to handle the full scope of software engineering, which includes complex problem-solving, nuanced requirement understanding, high-level design, and human collaboration.

The more likely trajectory is that AI will continue to become an indispensable tool for developers, automating more routine tasks and significantly changing how developers work. This will likely lead to substantial productivity increases ('massively faster' in some areas is plausible) and shift the focus of human developers towards higher-level design, verification, and solving novel problems. Complete replacement remains a distant prospect, dependent on breakthroughs far beyond current AI capabilities."
