The Smartest AI Models: IQ, Mensa Tests, and Human Context

AI models are constantly surprising us – but how smart are they, really?

A recent infographic from Visual Capitalist ranks 24 leading AI systems by their performance on the Mensa Norway IQ test, revealing that the top models now outscore the average human.

AI Intelligence, by the Numbers

Visual Capitalist’s analysis shows AI models scoring across several IQ bands:

  • “Highly intelligent” class (>130 IQ)
  • “Genius” level (>140 IQ), reached by the top performers
  • Models below these tiers still fall in the average or above-average human range

For context, the average adult human IQ is 100, with scores between 90 and 110 considered the norm.
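
IQ scores are standardized against the human population – conventionally a normal distribution with mean 100 and standard deviation 15. As a minimal sketch under that common assumption, here is how a score translates into a percentile (standard-library Python only):

```python
# Convert an IQ score to a human-population percentile, assuming the common
# standardization of IQ tests: normal distribution, mean 100, SD 15.
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

for score in (100, 120, 130, 140):
    print(f"IQ {score}: {iq.cdf(score):.1%} of people score at or below it")
# IQ 140 lands around the 99.6th percentile – roughly 1 person in 260.
```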

Humans vs. Machines: A Real-World Anecdote

Imagine interviewing your colleague, who once aced her undergrad finals – she might score around 120 on an IQ test. She’s smart, quick-thinking, adaptable.

Now plug her into a Mensa Norway-style test. She does well but places below the top AI models.

That’s where the surprise comes in: these AI models answer complex reasoning puzzles in seconds, with more consistency than even the smartest human brains. They’re in that “genius” club – but wholly lacking human intuition, creativity, or emotion.

What This IQ Comparison Really Shows

  • AI excels at structured reasoning tests – but real-world intelligence requires more: creativity, ethics, emotional understanding.
  • AI IQ is a performance metric, not character – models are powerful tools, not sentient beings.
  • Human + AI = unbeatable combo – merging machine rigor with human intuition unlocks the best outcomes.

Caveats: Why IQ Isn’t Everything

  • These AI models are trained on test formats – they’re not “thinking” or “understanding” in a human sense.
  • IQ tests don’t measure emotional intelligence, empathy, or domain-specific creativity.
  • A “genius-level” AI might ace logic puzzles, but still struggle with open-ended tasks or novel situations.

Key Takeaway

AI models are achieving IQ scores that place them alongside the brightest humans – surpassing 140 on standardized Mensa-style tests. But while they shine at structured reasoning, they remain tools, not people.

The real power lies in partnering with them – combining human creativity, ethics, and context with machine precision. That’s where true innovation happens.

The Rise of Large Language Models (LLMs)

In the rapidly evolving field of artificial intelligence (AI), Large Language Models (LLMs) have steadily become the cornerstone of numerous advancements. From chatbots to complex analytics, LLMs are redefining how we interact with technology. One of the most noteworthy recent developments is the release of Llama 3 405B, which aims to bridge the gap between closed-source and open-weight models in the LLM category.

Image credit: Maxime Labonne (https://www.linkedin.com/in/maxime-labonne/)

This post explores the current landscape of LLMs, compares closed-source and open-weight models, and examines the unique role played by small language models. We’ll also touch on the varied use cases and applications of these models before closing with a reasoned take on the merits and drawbacks of closed vs. open-weight models.

Recent Developments in LLMs

Llama 3 405B stands out as a significant breakthrough in the LLM space, especially in the context of open-weight models. With 405 billion parameters, it delivers robust performance that rivals, and in some cases surpasses, leading closed-source models. The shift towards genuinely open models like Llama 3 highlights a broader trend in AI towards transparency, collaboration, and reproducibility.
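
To put 405 billion parameters in perspective, here is a rough back-of-the-envelope estimate of the memory needed just to hold the weights at common precisions (actually serving the model needs more, for activations and caches):

```python
# Back-of-the-envelope memory footprint of 405B parameters at common precisions.
PARAMS = 405e9

for precision, bytes_per_param in [("fp32", 4), ("fp16/bf16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{precision}: ~{PARAMS * bytes_per_param / 1e9:,.0f} GB")
# fp16 alone is ~810 GB of weights – far beyond any single GPU, which is why
# open weights at this scale still demand serious infrastructure to self-host.
```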

Major players driving the continued evolution of LLMs include:

  • GPT-4 (OpenAI) remains a leading closed-source model, offering general-purpose applications with multi-modal capabilities.
  • Llama 3 405B (Meta AI) reportedly matches or exceeds the performance of some closed-source models.
  • Google’s PaLM 2 and Anthropic’s Claude 2 and 3.5 models similarly show strong performance across various tasks.

Closed-Source vs. Open-Weight Models

Closed-Source Models

Definition: Closed-source models are proprietary and usually not accessible for public scrutiny or modification. The company or organization behind the model keeps the underlying code, the weights, and often the training data private.

Examples:

  • GPT-4 (OpenAI)
  • Claude 3.5 (Anthropic)
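
In practice, closed-source models are consumed through a hosted, pay-per-use API rather than downloaded. A minimal sketch using OpenAI’s official Python client (assumes the `openai` package is installed and an `OPENAI_API_KEY` environment variable is set):

```python
# Minimal sketch: querying a closed-source model via its hosted API.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "In one sentence, what is an LLM?"}],
)
print(response.choices[0].message.content)
```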

Pros:

  1. Performance: Often optimized to achieve peak performance through extensive resources and dedicated teams.
  2. Security: Better control over the model can yield heightened security and compliance with regulations.
  3. Support and Integration: Generally come with robust support options and seamless integration capabilities.

Cons:

  1. Cost: Typically expensive to use, often based on a subscription or pay-per-use model.
  2. Lack of Transparency: Limited insight into the model’s workings, which can be a barrier to trustworthiness.
  3. Dependency: Users become reliant on the provider for updates, fixes, and enhancements.

Open-Weight Models

Definition: Open-weight models, often referred to as open-source models, have their weights accessible to the public. This openness allows researchers and developers to understand, modify, and optimize the models as needed.

Examples:

  • Llama 3 405B
  • BERT
  • GPT-Neo and GPT-J (EleutherAI)
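
Because the weights themselves are public, anyone can download and run these models locally. A minimal sketch using the Hugging Face `transformers` library with a small open-weight model from the list above (assumes `transformers` and `torch` are installed; Llama 3 405B follows the same pattern but requires multi-GPU hardware):

```python
# Minimal sketch: running an open-weight model locally with Hugging Face transformers.
# Assumes `pip install transformers torch`; GPT-Neo 125M is small enough for a laptop.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-neo-125m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Open-weight models let developers", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```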

Pros:

  1. Transparency: Enhanced understanding and ability to audit the model.
  2. Cost Efficiency: Often free to use or available at a lower cost.
  3. Innovation: Community-driven improvements and customizations are common.

Cons:

  1. Resource Intensive: May require significant resources to implement and optimize effectively.
  2. Security Risks: More exposure to potential vulnerabilities.
  3. Lack of Support: May lack the direct support and resources of commercial models.

Small Language Models

While much attention is given to LLMs, small language models still play a crucial role, particularly when resources are constrained or specific, narrowly defined tasks are in focus.

Key Characteristics of Small Language Models:

  • Limited Parameters: Typically fewer parameters, making them lighter and faster.
  • Resource Efficient: Lower computational requirements, cost-effective.
  • Targeted Applications: Effective for specific use cases like dialogue systems, sentiment analysis, or keyword extraction.

Popular Small Language Models:

  • DistilBERT: A distilled version of BERT that is smaller and faster while retaining much of its performance
  • TinyBERT: Another compressed version of BERT, designed for edge devices
  • GPT-Neo: A family of open-source models of various sizes, offering a range of performance-efficiency trade-offs
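
As a concrete example, the sentiment-analysis use case mentioned above takes only a few lines with DistilBERT. A minimal sketch using the Hugging Face `pipeline` API and the standard SST-2 fine-tune of DistilBERT (assumes `transformers` and `torch` are installed):

```python
# Minimal sketch: sentiment analysis with a small model (DistilBERT).
# Assumes `pip install transformers torch`.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("Small language models are surprisingly capable."))
# -> [{'label': 'POSITIVE', 'score': 0.99...}]
```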

Advantages of Small Language Models:

  • Reduced computational requirements
  • Faster inference times
  • Easier deployment on edge devices or resource-constrained environments
  • Lower carbon footprint

Conclusion: Closed vs. Open Source

The choice between closed-source and open-source LLMs depends on various factors, including the specific use case, available resources, and organizational priorities. Closed-source models often offer superior performance and ease of use, while open-source models provide greater flexibility, customization, and cost-efficiency.

As the LLM landscape continues to evolve, we can expect to see further convergence between closed-source and open-source models, as well as the emergence of specialized models for specific tasks.