Note: I originally published this essay on my Substack. Please follow me there and subscribe to receive my latest posts. Thank you!
First there was Deep Blue
The first major splash AI made in the public arena was IBM’s Deep Blue matches against Garry Kasparov. The supercomputer lost the first encounter in 1996, but won the rematch in 1997. Deep Blue’s victory reverberated for quite some time, capturing people’s imagination. The only problem is that Deep Blue wasn’t a genuine AI system, but a powerful machine programmed to search a huge library of chess positions and compute the move with the highest probability of winning.
It took another twenty years until the first real AI software created a public stir: in 2016, DeepMind’s AlphaGo won a five-game match against Lee Sedol, one of the world’s strongest Go players. AlphaGo achieved this success after months of machine learning on records of human games. Its successor, AlphaGo Zero, learned Go purely by self-play, without using any human games, and after three days of training it beat the AlphaGo Lee version. Amazingly, the generalised follow-up, AlphaZero, then learned chess in a matter of hours of self-play to reach superhuman strength. That was 2017.
DeepMind’s success stimulated a worldwide wave of AI research and applications across a variety of industries. However, after the initial excitement things went rather quiet. AI lost its aura of extraordinariness during an uneventful period of development, yet somehow it became ubiquitous by stealth. If you are a software company and you don’t have an AI statement somewhere in your About Us section or financial reports, something must be wrong with you.
I am not ignoring or minimising the technological achievements of the intervening years; one could spend a lot of ink just listing them. They are numerous, but they happened inside the world of engineering and are appreciated by a limited circle of professionals. With a few exceptions (Tesla’s self-driving software is one), these achievements occurred mostly behind the scenes. The public has been largely unaware of the true scale of progress in AI.
Definition: Neuromorphic AI
In this article I loosely use the term neuromorphic AI to describe systems that learn and generate content in ways that resemble how neural networks operate in the human brain. The term comes from the AI semiconductor industry, where it describes chips that learn to distinguish features of a particular environment or context using algorithms that weigh input stimuli by repetition, duration and intensity, implemented as spiking neural networks analogous to biological neurons. For simplicity I mostly write AI, but I mean neuromorphic AI when referring to examples of the new generation.
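As a loose illustration of the spiking idea (a toy sketch only, not a model of any actual neuromorphic chip), a leaky integrate-and-fire neuron accumulates stimuli over time and fires only when repetition or intensity pushes its potential over a threshold:

```python
# Toy leaky integrate-and-fire neuron: a loose illustration of how a
# spiking unit responds to repetition, duration and intensity of input.
# Purely illustrative; not a model of any real neuromorphic IC.

def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron spikes.

    inputs: sequence of stimulus intensities, one per time step.
    The membrane potential leaks between steps; a spike resets it.
    """
    potential = 0.0
    spikes = []
    for t, stimulus in enumerate(inputs):
        potential = potential * leak + stimulus  # leak, then integrate
        if potential >= threshold:
            spikes.append(t)
            potential = 0.0  # reset after firing
    return spikes

# Weak, isolated stimuli never reach the threshold...
print(simulate_lif([0.3, 0.0, 0.3, 0.0, 0.3]))  # → []
# ...but the same intensity repeated in quick succession does.
print(simulate_lif([0.3, 0.3, 0.3, 0.3, 0.3]))  # → [3]
```

Repetition, duration and intensity all feed the same potential, which is why such a unit naturally “weighs” stimuli the way the definition above describes.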
Then 2022 arrived
The first AI product to capture public attention in 2022 was Dall-E by OpenAI. Initially released in January 2021, the deep learning model was upgraded in 2022 to Dall-E 2, a version capable of creating art of vastly improved quality. As an example, see below a portrait generated by Dall-E 2, inspired by Vermeer’s “Girl with a Pearl Earring”.


The artificial art is eerily human. Anyone looking at this artificial painting is likely to first experience a feeling of awe. It is beautiful, interesting, unexpected. Take a closer look and you will see imperfections (the eyes, for example), but any work of art has them. This painting invites your imagination. The light and the bouncing reflections are natural, and so are the face, the combination of hair colour and the head cover. The decorative ribbon around the neck matches the dress. With an ease inaccessible to any human creator, the model can generate an unlimited number of versions of this portrait, each with a distinct artistic style, flair, and cultural imprint.
Another interesting aspect is depth and perspective. The AI painting is quite good because it recognises the balance between shadows and highlights. I am not sure how the training was designed, but the software distinguishes between different styles of rendering (pencil, watercolour, oil, photographic, spray paint, etc.).
Compared with its predecessors, this AI model has been trained on a vastly more complex data set. It can respond to virtually any query written in natural language and generate art on request. Its ability to differentiate subtle nuances is remarkable. The evolution from AlphaGo to Dall-E (products of two different companies, but, as happens in the art world, ideas pass from one to another, which from a historical perspective makes the advance seem unified) happened in two distinct areas: the data set is vastly larger and unstructured, and a successful outcome is judged by perception and artistic nuance.
Deep Blue used raw power to wrap the entire world of chess moves into one place. Its victory was impressive, but clearly a result of software programming. The “Girl with a Pearl Earring” variations are not programmed. The model worked through a massive archive of images to learn how to generate them. The outcome is the result of learning and learned associations, not programming. Huge difference.
The Eye Test
Going back to the girl with a pearl earring, the most difficult part of that painting is the eyes. Neuromorphic AI is not specifically programmed to paint eyes, so how does it do it? This is the most fascinating part of the new breed of AI: it learns about the world through association, signal strength, trial and error, and labelling, mirroring the way biological systems learn about their environment.
The eyes may be labelled Alpha123 based on billions of observations leading to the conclusion that a human face (labelled Beta456, for argument’s sake) has two eyes. The painting of an eye must be the result of a large number of interpolations over statistically relevant characteristics. It may sound “artificial”, but it is not. We function the same way: our visual sensory system collects separate aspects that flow along separate pathways: colour, shape, horizontal lines, vertical lines, brightness, and so on.
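To make the “statistical relevance” point concrete, here is a toy sketch (the observation counts are invented for illustration) of how a dominant association such as “a face has two eyes” could emerge from noisy labelled observations:

```python
from collections import Counter

# Toy version of the point above: across many noisy labelled observations,
# the statistically dominant association wins. The counts are invented
# illustrative data, not output of any real vision system.

observations = [2, 2, 2, 1, 2, 2, 3, 2, 2, 2]  # eyes counted per detected face

def dominant_association(samples):
    """Return the value with the highest statistical support and its share."""
    value, count = Counter(samples).most_common(1)[0]
    return value, count / len(samples)

eyes, support = dominant_association(observations)
print(eyes, support)  # → 2 0.8 — the model "learns" that a face has two eyes
```

Occasional miscounts (occluded faces, bad crops) don’t change the learned rule; they merely lower its statistical support, which is exactly why rare artefacts like extra fingers still slip through.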
When we remember someone, we recall all these elements and put them together into a coherent picture that resembles that person. In fact, if you close your eyes and try to recall someone’s image, your memory provides a sketchy, unstable picture even if you know that person very well.
Try to focus on particular details and you will find it is nearly impossible to get a well-defined snapshot. This is why drawing from memory is so difficult: it requires the mind to concentrate on recalling fragments of memory and on drawing, switching focus back and forth in a series of brief moments of clarity, never certain, always approximating. Similarly, the girl’s eyes in the example above seem to be painted with careful, fine brush strokes reproducing fleeting images recalled from human memory.
The “eye test” is a perceptive measure of the quality of artificial “thought”, but the eye is not the only part of the human body where AI can generate details that feel abnormal, creepy. I have seen otherwise perfect pictures of beautiful people with one toe too many or a barely visible sixth finger.
I believe that over time AI will make fewer and fewer errors like these. People, too, make such errors, and have since the beginning of time, but often on purpose (Picasso?). That distinction is important to keep in mind: whether the creator makes errors deliberately or not.
Intention
Overall the artificial portrait is pretty good, clearly better than the average person’s attempt at copying the original (mine included). What is missing? You can fall into a philosophical rabbit hole trying to answer this question, but I would single out the lack of intent. The software takes a general instruction and, after a few minutes, produces a decent (often surprisingly good) painting. There is no doubt, no hesitation of the kind caused by the desire to do better, to adjust, to perfect.

Maybe an improved form of neuromorphic AI would treat the drawing as a longer-term project stretching over days, weeks or months, during which the AI would search for sources of inspiration, generate multiple versions, make changes and add new features until the final work is complete. This is a little like AlphaGo Zero, who (ha, I said “who”!) learned by playing against itself. It would probably be impossible without human participants providing input such as “the right eye is round and bigger than the left eye” or “the head cover lacks texture details”. I am inclined to believe this is the right direction for AI in general (pairing humans with AI entities) if we are to create a better society in which humans perform activities of a higher intellectual order.
The lack of intent leads to a lack of emotion. This happens to humans too: for most of us who have tried to draw a portrait, the act is mostly technical, an attempt to mimic the original. Very few people are driven by the desire to express an emotion, and fewer still are capable of producing a good, coherent picture. Compare the two portraits and you notice the first has emotion embedded subtly in every detail (the eyes, the lips, the overall facial expression), while the AI-generated portrait is rather lifeless and, if you stare at those eyes a little longer, even a bit creepy. Keep in mind, though, that the creepiness is partly a subjective reaction: we know who created the portrait. If your 10-year-old daughter had painted this, you would be ecstatic.
The Dall-E AI lacks the desire, the intent to express a particular idea, but with more data and more computing power, that will come sooner or later. Maybe sooner than we think.
AI – Human Collaboration
What if AI could interpret more descriptive, complex instructions and produce drawings in a story format? The quality jump from Dall-E to Dall-E 2 shows how rapidly this machine learning model evolves. Dall-E 3 will likely have a substantially better ability to render micro-details (eyes, texture, repeated reflections) and create art more distinct from its original inspiration. Human creators could “discuss” finer aspects with the AI in a process that amplifies the power of human imagination. This opens the door for a much larger number of people to produce art in an infinite array of expressions. Imagine the flow-on effect into architectural design, retail environments, parks, cars, clothing. The sky is the limit.
An article published on CNET tells the story of how Steve Coulson created a visual science fiction odyssey using the Midjourney AI to generate images from text-based instructions. The result was stunning.

The difference between this experiment and the girl portrait created with Dall-E is the originality of the idea: the AI compiled the instructions and generated content without a visual reference. This is a step up.
These are the type of instructions used in this unusual collaboration project:
The Campfire team, for example, liked the rich effect produced by the style prompt “olive-green and sepia and teal-blue tritone print on watercolor paper,” so they used that one often to give images a painterly effect. For The Lesson, the phrase “futuristic underground bunker in the style of J.C. Leyendecker” yielded the perfect retro-futuristic postapocalyptic hideaway.
“We also used the phrase ‘Hitchcock Blonde’ to describe our heroine, and more often than not she came out looking like Grace Kelly,” Coulson said. That’s a fully recognizable Grace Kelly, without misplaced ears or a dog snout.
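The workflow described in the quotes amounts to prompt templating: a reusable style phrase appended to each subject description. A minimal sketch, where the compose() helper is hypothetical and only the quoted phrases come from the article:

```python
# Sketch of the prompt-composition pattern described above: a reusable
# style phrase appended to each subject description so a whole series of
# images shares one look. compose() is a hypothetical helper; the phrases
# are the ones quoted from the CNET article.

STYLE = "olive-green and sepia and teal-blue tritone print on watercolor paper"

def compose(subject, style=STYLE):
    """Join a subject description with a reusable style phrase."""
    return f"{subject}, {style}"

print(compose("futuristic underground bunker in the style of J.C. Leyendecker"))
print(compose("Hitchcock Blonde heroine by a campfire"))
```

Keeping the style phrase fixed while varying only the subject is what gives a generated story its visual consistency from panel to panel.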
Steve Coulson has no artistic talent. He can put together a sci-fi story and has a vivid imagination, but he lacks the skills to produce the visuals himself. Yet together with the AI he created a beautiful and intriguing sci-fi book.
A strong, evolving AI needs three elements: vast amounts of data, a good algorithmic learning core, and continuous interaction with the real world. The first two are essential for quality training, but the third is critical for achieving human-level quality.
The tendency to think of AI as a series of software releases is a legacy of old structured-programming practice. True AI software learns from interactions with human users and adapts its internal model guided by the feedback it receives. The software upgrade becomes a continuous evolution.
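A toy sketch of that continuous-adaptation idea, using a deliberately simple one-parameter model nudged by each batch of feedback rather than upgraded in discrete releases (real systems adapt far more elaborate models, of course):

```python
# Toy sketch of continuous adaptation: a model parameter nudged by each
# piece of user feedback instead of waiting for a discrete release.
# The single-weight "model" is purely illustrative.

def online_update(weight, feedback_pairs, lr=0.1):
    """One pass of stochastic updates from (input, target) feedback."""
    for x, target in feedback_pairs:
        prediction = weight * x
        error = target - prediction
        weight += lr * error * x  # gradient step on squared error
    return weight

w = 0.0
for _ in range(50):  # feedback keeps arriving; the model keeps adapting
    w = online_update(w, [(1.0, 2.0), (2.0, 4.0)])
print(round(w, 3))  # → 2.0, the relationship implied by the feedback
```

There is never a “version 2” moment here: the model simply drifts toward whatever the stream of feedback implies, which is the continuous-evolution point made above.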
The screens below reflect the depth of Midjourney’s database.

The portrait of the old woman evokes historical paintings and drawings from the Middle Ages. The young science teacher is clearly inspired by photos of Grace Kelly. The human imagined the characters to reflect the contrast between old and new, superstition and science, and the AI delivered. Amazing composition.

This one mixes old and contemporary snapshots of London with sci-fi comic art. Again, the human is the real creator here, the one who chose the elements with maximum effect: a depressing view of the city of London terrorised by a huge reptilian beast.

Here, Egyptian symbolic writing with inserts of alien reptilian figures results in intriguing drawings that could, without a shadow of doubt, be attributed to a good creative artist. You have to give credit to the AI partner that drew this amazing shot.
These pictures were selected from a larger number of trials. The process is high-level machine learning for both the AI and the human: the experience adds to the value and power of the AI over time, but it also improves the composition skills of the human collaborator. AI machines and humans are evolving together. This collaboration could lead to the emergence of new professional fields far beyond art, fields that split into unlimited specialisations with unparalleled productivity. It dispels the fear that only a few highly skilled professionals will benefit from advanced AI. What this new type of software does is open opportunities for all those “amateurs” able to connect various fields of knowledge. The result will be an explosion of creativity of Cambrian proportions.
On the science and engineering side, a new set of professional occupations will arise around natural language processing, with sub-specialisations in particular domains of knowledge, learning methodologies and so on. We already see it happening in medicine, where AI is increasingly used to process images and identify potential anomalies and diseases.
ChatGPT: THE Major Breakthrough
The most consequential AI software released this year is ChatGPT. It is the closest thing to general AI ever produced, vastly more complex and capable than AlphaGo Zero because it can “understand” and respond to virtually any question posed by a human in free-form text. The responses are so well presented that it is virtually impossible for an unsuspecting reader to guess they were produced by an AI system.
This is a watershed moment in AI evolution. I am convinced that many years from now this will be marked as the point when we entered an era of accelerated technological advance. It may sound a bit dramatic, but I believe it marks the dawn of a new society, one that fifty years from now will be vastly different from the one we live in today.
What is so significantly different?
The most important difference is the quality of the response to input. The accuracy of the reply mimics human thought in the smallest detail. ChatGPT “thinks” through associations trained via machine learning on a huge data set with advanced neuromorphic algorithms. The result is of impeccable quality, even in the way it makes mistakes!
This is the key difference between the “old AI” and the new: traditional AI navigates a vast number of logical options until it finds the exact answer. It imitates a human by serving up ready-made fragments of text. Not much different from Google’s search engine, traditional AI is incapable of subtlety.
ChatGPT is a completely different proposition. Instead of navigating decision trees, it jumps along neural pathways, making associations and assembling text according to rules learned by osmosis, the same way children do. Even the rules change as the software keeps learning.
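A crude way to see the difference is a toy bigram model that assembles text from learned word-to-word associations instead of retrieving stored answers. ChatGPT’s transformer is incomparably more sophisticated; this only illustrates the “assemble from learned statistics” idea:

```python
import random
from collections import defaultdict

# Toy bigram language model: a crude illustration of generating text from
# learned word-to-word associations rather than copying stored answers.
# Not remotely how ChatGPT works internally; it only shows the principle.

def train_bigrams(text):
    """Record which words follow which in the training text."""
    model = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, length=5, seed=0):
    """Walk the learned associations to assemble a new word sequence."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break  # no learned continuation
        out.append(rng.choice(followers))
    return " ".join(out)

model = train_bigrams("the cat sat on the mat and the cat ran")
print(generate(model, "the"))
```

Note that the output can be a sentence that never appears in the training text: every transition was learned, but the assembly is new. That is also, in miniature, why such systems can produce fluent statements that were never true.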
Caveat Emptor
At its core, ChatGPT is no different from Dall-E or Midjourney: the output is a learned approximation. In simpler cases ChatGPT provides clear, correct answers, but sometimes it makes mistakes, just as the drawing software does. By analogy, ChatGPT occasionally fails the “eye test”: the response is chimeric, a mixture of truth and fabrication, and it looks odd. The phenomenon has been reported by many who tested ChatGPT more systematically.
A good example is the test conducted by Teresa Kubacka and published on Twitter. Asked to find cited research on multiferroics, ChatGPT assembled a response that seemed reliable, but on closer inspection it turned out not to be.
The strange part was that the citations were fabricated. This indicates that ChatGPT doesn’t follow a search tree but responds from knowledge built through machine learning, which statistically validates the “correctness” of the memorised/learned knowledge, hence the chimeric answer:

When Teresa Kubacka asked about a concept that doesn’t exist, “what is a cycloidal inverted electromagnon?”, ChatGPT provided an “expert” answer that was a made-up story.
The overall experience is summed up in these two observations below:
I left the conversation with the intense feeling of uncanniness: I just experienced a parallel universe of plausibly sounding, non-existing phenomena, confidently supported by citations to non-existing research. Last time I felt this way when I attended a creationist lecture.
I also have a lot of worries of what it means to our societies. Scientists may be careful enough not to use such a tool or at least correct it on the fly, but even if you are an expert, no expert can know it all. We are all ignorants in most areas but a selected few.
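One pragmatic defence against such chimeric answers is to verify every generated citation against a trusted index before accepting it. A toy sketch, in which all paper titles are invented for the example:

```python
# Toy illustration of why chimeric answers are dangerous and how a
# verification step helps: check each generated citation against a
# trusted index. All titles below are invented for the example.

KNOWN_PAPERS = {
    "Multiferroics: progress and prospects",
    "Spin-driven ferroelectricity in perovskites",
}

generated_citations = [
    "Multiferroics: progress and prospects",        # present in the index
    "Cycloidal inverted electromagnons in BiFeO3",  # plausible but fabricated
]

def verify(citations, index):
    """Split generated citations into verified and suspect lists."""
    verified = [c for c in citations if c in index]
    suspect = [c for c in citations if c not in index]
    return verified, suspect

ok, suspect = verify(generated_citations, KNOWN_PAPERS)
print("verified:", ok)
print("suspect:", suspect)
```

The fabricated title is indistinguishable from a real one by style alone; only the lookup against an external source of truth catches it, which is exactly the check a casual reader never performs.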
The neuromorphic nature of the algorithm makes ChatGPT exceptionally suitable for creative projects, but not for rigorous statements. ChatGPT may be able to help within narrow scientific subject areas, where formulas are easy to find, but not on the bleeding edge of science. Google is still needed, although the traditional search engine clearly faces an existential threat.
Perhaps combining ChatGPT with a traditional search engine is the best option for now. Microsoft, a stakeholder in the project (the model was trained on the Azure platform), could upgrade Bing to offer a hybrid search solution. I understand Google has been working on a similar research project, so we should expect a competing product on that front.
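A minimal sketch of the hybrid idea: retrieve trusted documents first, then constrain the generated answer to what was retrieved. The naive keyword-overlap retrieval and the refusal fallback are illustrative assumptions, not how Bing or any real product works:

```python
# Minimal sketch of hybrid search: retrieve trusted documents first, then
# answer only from what was retrieved, refusing when nothing matches.
# retrieve() uses naive keyword overlap; a real system would use a search
# index, and answer() would call a language model over the retrieved text.

DOCS = {
    "doc1": "multiferroics combine ferroelectric and magnetic order",
    "doc2": "transformers are trained on large text corpora",
}

def retrieve(query, docs, top_k=1):
    """Rank documents by how many query words they share."""
    q = set(query.lower().split())
    scored = sorted(docs.items(),
                    key=lambda kv: len(q & set(kv[1].split())),
                    reverse=True)
    return [doc_id for doc_id, text in scored[:top_k]
            if q & set(text.split())]

def answer(query, docs):
    hits = retrieve(query, docs)
    if not hits:
        return "No supporting document found."  # refuse, don't fabricate
    return f"Based on {hits[0]}: {docs[hits[0]]}"

print(answer("what are multiferroics", DOCS))
```

Grounding the reply in retrieved documents is what would let a hybrid engine cite real sources instead of inventing them.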
Conclusion
I take away these key points and ideas about the major advances in AI in 2022, as a starting point for further research and investment strategy development next year:
- AI software is noticeably getting closer to AGI level, exhibiting characteristics that mimic human thinking.
- Artistic content generated by AI has reached a level of quality well above that produced by the average human.
- ChatGPT is capable of producing high-quality text that is mostly indistinguishable from content written by highly skilled individuals.
- Both image and text AI generators make “fuzzy” errors, sometimes obvious and horrible, sometimes nearly undetectable.
- A massive range of job opportunities will be created by neuromorphic AI, available to a much larger population and changing the dynamics of business, education, and the job market.
- The democratisation of creative endeavour will challenge traditional educational practices. Knowledge will be cheap, but higher-order skills involving innovation and collaboration with both humans and AI will be expensive. This is a boon for those who think big and act swiftly.
- Teaching will be one of the most disrupted professions.