Artificial Intelligence and Machine Learning Demystified

Artificial intelligence (AI) is arguably the most ubiquitous yet least understood term on the internet in recent years. What exactly is it?

Background

You'd be forgiven for thinking artificial intelligence (AI) is a new, cutting-edge topic. In fact, it has been around for the best part of a century. Since the early 1950s there have been several explosions of interest in AI, usually beginning with high hopes and lofty goals and culminating in disappointment when those goals could not be met, often followed by reduced funding and academic interest (the so-called 'AI winters'). The last two decades, however, have seen an astronomical resurgence in AI that appears only to be gaining momentum.

With its meteoric rise come ramifications both known and unknown, such as ethical concerns over the use of personal data and the displacement of jobs. Google DeepMind's CEO has compared the hype surrounding AI to the crypto boom, while stating that Google intends to spend over $100 billion further developing the technology. AI is already used in email spam detection, disease diagnosis and research, automotive safety, fraud detection, personal assistants, smart home devices, customer service, and more.

AI / ML / DL

AI has become a bit of a buzzword in the last few years as speculative investors fear missing out on the boom. Companies often integrate AI into their products, or at least claim to, in an effort to garner exposure and increase sales. So, what is it really?

AI is a rather large umbrella term open to some interpretation, but it is best described as a branch of computer science whose goal is to create intelligent software and machines. It's often modelled on human intelligence so that it can better interact with humans and solve human problems, in applications ranging from speech recognition and language translation through to computer vision.

We can break it down further into two core concepts: artificial general intelligence (AGI), sometimes referred to as 'strong AI', and artificial narrow intelligence (ANI), also known as 'weak AI'. General intelligence is often thought of as the golden prize, a landmark achievement should it ever be realised. That said, don't be fooled - a true AGI is considered a long way off. Science fiction often paints the clearest picture of AGI: a machine that can acquire new knowledge and reason about it intelligently. Current AI remains firmly in the ANI stage, and even large language models (LLMs), which may appear to be generally intelligent, are in fact narrow implementations of AI. They are not truly intelligent in the sense of achieving consciousness or understanding topics beyond their training algorithms and data.

So, what is machine learning?

Machine learning (ML) is a branch of AI focused on algorithms that learn by analysing large amounts of data. It is built upon statistical models with the ability to course-correct themselves. This can be likened to teaching a child through exposure to patterns and letting them infer those patterns for themselves, rather than hard-coding a rule base to be strictly followed.

ML algorithms are primarily split into two categories:

  • Supervised learning: achieved with labelled data, wherein an algorithm learns from a subset of the data and tests itself against the rest. Performance is measured against the labels, and the model adjusts for the next iteration. Best suited to tasks such as email filtering, credit scoring, and speech transcription.
  • Unsupervised learning: achieved with unlabelled data, wherein an algorithm attempts to identify patterns without any guidance. Best suited to tasks such as market segmentation, anomaly detection, and e-commerce product recommendation. A sketch of both approaches follows this list.
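
To make the distinction concrete, here is a minimal sketch of both categories using Python and scikit-learn. The dataset (the classic Iris flowers) and the model choices are illustrative assumptions rather than recommendations:

    # pip install scikit-learn
    from sklearn.cluster import KMeans
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)

    # Supervised: learn from a labelled subset, then test against held-out data.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0
    )
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"Supervised accuracy: {model.score(X_test, y_test):.2f}")

    # Unsupervised: ignore the labels entirely and let the algorithm find structure.
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print("Cluster sizes:", [int((clusters == k).sum()) for k in range(3)])

Note that the supervised model is scored against known labels, whereas the clustering step never sees them at all - that is the essential difference between the two categories.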

How about deep learning?

Deep learning (DL) is a further subset of ML focused on algorithms and architectures that loosely mimic the structure of biological brains. Most of the groundbreaking work of the last 15 years has been done with DL.

DL models are built on neural networks comprising a single input layer, one or more hidden layers, and one output layer. OpenAI's Generative Pre-trained Transformer 3 (GPT-3) has 96 hidden layers, each 12,288 neurons wide. That may sound like a lot, but it has been dwarfed by GPT-4 (though we don't have all of the details just yet). The types of neurons used, and the architectures built from them, depend on the problem at hand: multilayer perceptrons (MLPs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs) are the most common.
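
As a sketch of that layered structure, here is a tiny MLP in PyTorch. The layer widths, feature count, and number of classes are arbitrary assumptions chosen purely for illustration:

    # pip install torch
    import torch
    import torch.nn as nn

    # One input layer, two hidden layers, one output layer.
    mlp = nn.Sequential(
        nn.Linear(4, 64),   # input layer: 4 features in, 64 hidden neurons out
        nn.ReLU(),
        nn.Linear(64, 64),  # second hidden layer
        nn.ReLU(),
        nn.Linear(64, 3),   # output layer: scores for 3 hypothetical classes
    )

    x = torch.randn(8, 4)   # a batch of 8 examples, 4 features each
    logits = mlp(x)         # one forward pass through every layer
    print(logits.shape)     # torch.Size([8, 3])

Each nn.Linear here is a fully connected layer of neurons; stacking more of them, and making them far wider, is in essence what separates GPT-scale models from this toy.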

You don't need to know all of the algorithms, but it's important to know that broad, complex problems can be tackled in two ways: by going big with a single model, as with GPT-1 through GPT-4, or by combining multiple smaller models into what's called an ensemble (sketched below).
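
A minimal ensemble sketch using scikit-learn's VotingClassifier; the three constituent models are assumptions chosen only to show the pattern:

    from sklearn.datasets import load_iris
    from sklearn.ensemble import VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)

    # Three small, dissimilar models; each prediction is a majority vote.
    ensemble = VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("nb", GaussianNB()),
            ("dt", DecisionTreeClassifier(random_state=0)),
        ],
        voting="hard",
    )
    ensemble.fit(X, y)
    print(ensemble.predict(X[:3]))  # majority-vote predictions for 3 samples

Because the constituent models tend to make different kinds of errors, the majority vote is often more robust than any single member.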

How can we help?

At GeoTech, we have experience building both ML and DL models to address real-world problems.

Predictive modelling

We've designed and built geospatial predictive models for cultural resource management, natural catastrophe risk assessment, and land-use recommendation. If your problem involves spatial resource allocation, handling complex networks, or understanding how climate change might impact your business, our predictive modelling could help.

Machine vision

We've designed and built machine vision models for satellite and aerial imagery to identify and classify features. If your problem involves identifying large assets or assessing changes over time, machine vision could help.

Remote sensing signal classification and regression

We've designed and built remote sensing signal models for classifying agricultural land use and detecting deforestation. If your problem involves monitoring environmental change or assessing land productivity, remote sensing signal modelling could help.

We are always happy to have a chat through any problems you have. Please get in touch with us or reach out on LinkedIn if you'd like to learn more.