The Rise of Connectionist AI - Deep Learning

AI quietly simmered in academic labs through the 1990s, with researchers making incremental progress in areas like machine learning and probabilistic methods. The historical limitations of connectionist approaches began to recede with the advent of:


Massive Datasets

The internet and digitization in general provided an unprecedented volume of data on which to train models.  Millions of man-hours were invested in manually annotating data for training sets.  


Increased Computational Power

Following Moore’s Law, computer chips and memory became exponentially smaller, cheaper, and faster.  In particular, Graphics Processing Units (GPUs), developed for video games and movie-making, proved highly effective for the mathematical processing needed by neural networks.
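
To make the GPU advantage concrete, here is a minimal sketch (assuming PyTorch is installed; the matrix sizes are illustrative) that times one large matrix multiplication, the core arithmetic of a neural network layer, on the CPU and then on a GPU when one is present:

    import time
    import torch

    # Two large random matrices; multiplying them is the same kind of
    # arithmetic a neural network layer performs on every forward pass.
    a = torch.randn(4096, 4096)
    b = torch.randn(4096, 4096)

    # Time the multiplication on the CPU.
    start = time.perf_counter()
    a @ b
    cpu_s = time.perf_counter() - start

    if torch.cuda.is_available():
        # Copy the data to the GPU, then time the same multiplication there.
        a_gpu, b_gpu = a.cuda(), b.cuda()
        torch.cuda.synchronize()  # wait for the copy to finish
        start = time.perf_counter()
        a_gpu @ b_gpu
        torch.cuda.synchronize()  # wait for the result before stopping the clock
        gpu_s = time.perf_counter() - start
        print(f"CPU: {cpu_s:.2f}s  GPU: {gpu_s:.2f}s")
    else:
        print(f"CPU: {cpu_s:.2f}s (no GPU detected)")

On typical hardware the GPU version runs one to two orders of magnitude faster, which is exactly the leverage that made training large networks practical.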


Algorithmic Advancements

Increases in training data and computing infrastructure opened the door to experimentation with novel, highly complex neural network architectures.  


These factors fueled a spectacular resurgence of connectionist AI in the 2000s and 2010s, leading to our current AI boom. Deep learning models achieved breakthrough performance on problems that had gone unsolved for six decades. What had once been open computer science problems moved into the realm of everyday software:


Speech Recognition

We started talking to our computers, cars, and phones, trusting that they would understand us.
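
Part of the story is how little code this now takes. A minimal sketch, assuming the open-source openai-whisper package and a local recording named audio.wav (a hypothetical filename):

    import whisper

    # Load a small pretrained speech-recognition model (downloaded on first use).
    model = whisper.load_model("base")

    # Transcribe the recording to text.
    result = model.transcribe("audio.wav")
    print(result["text"])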


Image Recognition

Machine vision can now recognize faces, places, and arbitrary objects, often matching or exceeding the speed and accuracy of human recognition. Neural networks consistently outperform humans in important classification applications, e.g., identifying cancerous tissue.
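
As an illustration of how accessible this has become, here is a minimal sketch that labels a photo with a pretrained network, assuming torchvision 0.13 or later and a local image named photo.jpg (a hypothetical filename):

    import torch
    from PIL import Image
    from torchvision.models import resnet50, ResNet50_Weights

    # Load a ResNet-50 pretrained on ImageNet, with its matching preprocessing.
    weights = ResNet50_Weights.DEFAULT
    model = resnet50(weights=weights).eval()
    preprocess = weights.transforms()

    # Preprocess the image and add a batch dimension.
    batch = preprocess(Image.open("photo.jpg")).unsqueeze(0)

    # Report the highest-scoring ImageNet category.
    with torch.no_grad():
        logits = model(batch)
    print(weights.meta["categories"][logits.argmax().item()])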


Machine Translation

Translating from one language to another is now cheap and ubiquitous.
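
"Cheap and ubiquitous" can be taken literally. A minimal sketch, assuming the Hugging Face transformers library and the default English-to-French model it downloads:

    from transformers import pipeline

    # Download a default English-to-French model on first use.
    translator = pipeline("translation_en_to_fr")
    print(translator("Deep learning changed everything."))
    # prints e.g. [{'translation_text': '...French text...'}]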


Natural Language Processing (NLP)

Capabilities such as translation, auto-complete, sentiment analysis, and text summarization laid the groundwork for the development of Large Language Models (LLMs).
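
The same few-line pattern covers these building blocks. A minimal sketch of sentiment analysis, again assuming the Hugging Face transformers library and its default sentiment model:

    from transformers import pipeline

    # Download a default sentiment-analysis model on first use.
    classifier = pipeline("sentiment-analysis")

    print(classifier("Talking to my phone actually works now."))
    # prints e.g. [{'label': 'POSITIVE', 'score': 0.99}]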



