Artificial Intelligence (AI): A Lexicon in Narrative Form
How are chatbots like ChatGPT created? What makes them intelligent? And what role does humanity play in this process? Patrick Ratheiser and Christian Weber—founders of the AI company Leftshift One—take you into the world of artificial intelligence.
17 October 2024

Karin Schnedlitz
Content Manager

A whiteboard and a green marker. These two profoundly analog tools are all Christian Weber needs to explain the essence of artificial intelligence (AI). The co-founder of the AI-focused Austrian startup Leftshift One begins by drawing circles on the board. The first term he writes is “AI.”
AI [Artificial Intelligence] While the term “AI” is common in the English-speaking world and among developers, the abbreviation “KI” for Künstliche Intelligenz (artificial intelligence) has become established in German. “AI in the form of rule-based expert systems is several decades old,” says Weber. Only with its subcategory, machine learning, did AI become technologically “exciting.”
ML [Machine Learning] Machine learning enables computer systems to learn independently from large datasets and improve their performance. Pattern recognition and mathematical methods are central components. For machine learning to work, the algorithm must first be trained. This can occur with minimal human effort and unfiltered input data [Unsupervised Learning] or with meticulously pre-processed example data [Supervised Learning]. After the learning process with the known datasets, the model is applied to unknown data. To achieve quality results, development loops occur repeatedly, during which a human evaluates the AI’s outcomes.
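The supervised-learning idea described above can be illustrated with a minimal sketch: a k-nearest-neighbors classifier that learns from labeled example points and then labels unknown data by pattern similarity. The dataset and labels here are invented for illustration, not from Leftshift One.

```python
from collections import Counter
import math

# Hypothetical pre-labeled training data [Supervised Learning]:
# 2D points, each annotated by a human with class "A" or "B".
training_data = [
    ((1.0, 1.0), "A"), ((1.5, 2.0), "A"), ((2.0, 1.5), "A"),
    ((6.0, 6.0), "B"), ((7.0, 5.5), "B"), ((6.5, 7.0), "B"),
]

def classify(point, k=3):
    """Label an unknown point by majority vote of its k nearest training examples."""
    nearest = sorted(training_data, key=lambda item: math.dist(point, item[0]))
    labels = [label for _, label in nearest[:k]]
    return Counter(labels).most_common(1)[0][0]

# After "training" (here: just storing examples), the model is applied to unknown data.
print(classify((2.0, 2.0)))  # near the "A" cluster
print(classify((6.0, 6.5)))  # near the "B" cluster
```

In a real project, the evaluation loop the text mentions would follow: a human checks such outputs, corrects mislabeled examples, and the model is retrained.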
"AI in the form of rule-based expert systems is several decades old."
Christian Weber, Co-Founder, Leftshift One
DL [Deep Learning] A subset of “Machine Learning” is called “Deep Learning.” This technology became possible only thanks to state-of-the-art, high-performance graphics cards, as Christian Weber explains. It works significantly better with unstructured datasets than “classical” machine learning. Deep Learning identifies structures on its own. The prerequisite for this is large amounts of data, which in turn require substantial computing power. Training can take months. Only then are good predictions and decisions possible. While data preparation in classical ML is predominantly in human hands, this is not the case with Deep Learning. Today, Deep Learning is primarily used for complex image and speech recognition.
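The “training” that can take months is, at its core, the repeated adjustment of weights in a layered network. A toy version of that loop can be shown in a few lines: a tiny two-layer network learning the XOR pattern, a structure a single linear model cannot capture. Everything here (layer sizes, learning rate, iteration count) is illustrative, far removed from the scale that requires high-performance graphics cards.

```python
import numpy as np

# XOR: output is 1 exactly when the two inputs differ.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # input -> hidden layer
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for step in range(5000):
    h = sigmoid(X @ W1 + b1)       # hidden activations
    out = sigmoid(h @ W2 + b2)     # network prediction
    losses.append(float(np.mean((out - y) ** 2)))
    # Backpropagation: push the error back through both layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print("loss at start:", round(losses[0], 4), " loss at end:", round(losses[-1], 4))
```

Real deep-learning systems run this same pattern with billions of weights and vast unstructured datasets, which is why the training times and hardware demands the text describes arise.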
Generative AI [AI That Creates] This popular term refers to applications based on AI that create things, whether text, audio, or video. The chatbot ChatGPT falls into this category, as do Midjourney and Stable Diffusion, programs for creating images.
LLM [Large Language Model] An LLM, or large language model, is usually based on deep-learning technology. It learns from large datasets in the background and can ultimately understand content: an LLM can write summaries or make predictions. A particularly large and resource-intensive model is GPT-4, the technological backbone of ChatGPT. Google uses PaLM, while Meta (formerly Facebook) employs LLaMA.
Foundation Model [The Base] At the core of a powerful AI system lies a “foundation model,” as Christian Weber, the current CTO at Leftshift One, explains. The startup itself relies on freely accessible open-source models and combines them [Composite AI]. Language models, for example, draw on millions, if not billions, of input texts in the background. This enables them to develop a sense of semantics and statistically understand which words are most likely to follow one another.
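The statistical intuition behind “which words are most likely to follow one another” can be made concrete with a deliberately tiny sketch: counting word pairs (bigrams) in a toy corpus and turning the counts into probabilities. Real language models learn far richer representations, but the counting idea is the same in spirit.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the millions of input texts a real model sees.
corpus = (
    "the cat sat on the mat . "
    "the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count how often each word follows each other word.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def next_word_probabilities(word):
    """Estimate P(next word | current word) from the bigram counts."""
    counts = followers[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))
# "cat" is the most probable successor, since "the cat" occurs twice in the corpus.
```

Scaling this idea up (with neural networks instead of raw counts, and context windows far longer than one word) is what gives a foundation model its sense of semantics.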
"Human communication is the gold standard in AI."
Patrick Ratheiser, CEO Leftshift One
A ChatGPT for Businesses
Modeled on the popular US chatbot ChatGPT, Leftshift One develops tailored chatbots for businesses. These promise streamlined models, data protection, and transparent answers with source references. Learn more about it here.