by Nick Tate, Specialist Consultant Mapper of Geology, Alteration and Mineralization Systems
A discussion of whether AI is a threat or a boon to humanity
For a long time, ‘AI’ was just another set of algorithms and most people thought that it would take decades to reach a level that approached ‘HI’ (human intelligence). Then the world learned of chatbots like ChatGPT. Suddenly, there was a computer program that could pass a university-level astrophysics exam, write poetry, generate a business plan and produce functional computer code. It was closely followed by stunning hyper-realistic fake images generated by Midjourney and AI-generated girlfriends from the Replika app that could hold a sensible conversation.
The capability of AI and the rate of acceleration of the technology scared a lot of people (including some of the architects of the algorithms). There have been numerous calls to halt development or shut it down. The three key fears are:
- It will take over human jobs and social roles.
- It could go rogue and control humans.
- It will distort reality for humans.
To unravel these fears, you need to understand a little about how AI works. An excellent video interview with cognitive psychologist Geoffrey Hinton, a leading expert in artificial neural networks, explains it in some detail, but in short, the algorithms have stacked layers that each identify small components of images or text. In a geological example, one layer might be used on a satellite image to decide if a change in color is a geological contact. A layer above that might decide if the patterns on either side of the contact are outcrop or cover, and another layer might decide if the outcrop area is igneous or sedimentary rock. The layers can be stacked in the algorithm to produce a geological map. Each layer has to be trained by tweaking its parameters until the end result matches a ‘known’ geological map. Chat algorithms work in a similar way, identifying patterns of language in the question and matching them to answers found in the historical text used to train them. The training process was originally very laborious because humans had to do the comparisons and rate the results, but everything accelerated rapidly when the algorithms got smart enough to make their own assessments and write their own code. That’s when everyone got scared.
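To make the layering idea concrete, here is a minimal sketch in PyTorch of a toy pixel classifier for a satellite tile. Everything in it (the class names, the image size, the three rock-type labels) is an illustrative assumption of mine, not a description of any real mapping tool; the comments tie each layer back to the geological example above.

```python
# A toy 'stacked layers' classifier, following the geological example above.
# Purely illustrative: the architecture, labels and sizes are my assumptions.
import torch
import torch.nn as nn

class ToyGeologyMapper(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.layers = nn.Sequential(
            # Lowest layer: reacts to local colour changes (candidate contacts).
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            # Middle layer: textures either side of a contact (outcrop vs cover).
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            # Top layer: a rock-type score per pixel (e.g. igneous vs sedimentary).
            nn.Conv2d(32, n_classes, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x)

# 'Training' is tweaking the parameters until the output matches a known map.
model = ToyGeologyMapper()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 64, 64)              # stand-in satellite tile (RGB)
known_map = torch.randint(0, 3, (1, 64, 64))  # stand-in 'known' geological map

for step in range(200):
    optimiser.zero_grad()
    loss = loss_fn(model(image), known_map)   # compare prediction to known map
    loss.backward()                           # work out which tweaks reduce error
    optimiser.step()                          # apply the tweaks
```

On random stand-in data the model just memorizes one tile, but the structure is the point: lower layers feed higher layers, and training nudges every parameter toward agreement with the ‘known’ map. So, let’s look at the big issues: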
AI is coming for your job
Creative people thought that they were safe from AI because computers could not generate anything truly new. Then ChatGPT started writing poems in a few seconds that would take a human hours or days to pen. Midjourney, and now Photoshop, can instantly generate hyper-realistic photographs that would take days to create with cameras and a photographic crew. Chatbots can write thousands of lines of computer code that would take a software engineer a week to craft. AI can digest every legal case ever prosecuted and determine the most likely sentence. A medical AI could analyze symptoms from every disease ever reported and diagnose rare conditions that are unknown to your average general practitioner. We are already seeing AI software designed to log core and generate geological maps from various remote sensing data. So, are we the last generation of human poets, photographers, engineers, lawyers, doctors and geologists?
Perhaps not, but the landscape will certainly change dramatically for anyone who makes a living from creative or technical work. When Photoshop came out and suddenly it was possible to compile multiple images and remove defects, there were howls of derision that it would destroy the art of photography. The howls became even more shrill when digital cameras appeared. Photography didn’t die, but it changed forever. People learned that the camera does lie (and in fact it always has), and viewers came to see photographs in a different light. That view will change again with the evolution of AI, and the business of generating images and selling them will morph with it. A new generation of image makers working solely with AI is already arising, and photographers will move into the niches where the AI has no training data to work with, such as current events (sports, weddings, portraits, etc.). In short, if you are in the business of creating images that inspire dreams, you had better sell your cameras and learn how to use AI tools. If you are in the business of making memories, the competition will thin out and AI tools will help you to make better images faster, so you will be more profitable if you adapt.
For geologists, the new AI tools will certainly be a challenge in production environments like mines. Once the geology of a system is reasonably well understood, AI will have a good training set to work on, and core logging, pit mapping and model generation will all be done faster, better and cheaper by AI geologists than by humans. One geologist will probably be required to monitor the system in case it goes off on a tangent or encounters something new that it doesn’t recognize correctly. Long term, that will be a problem, because the entry-level mine geology jobs that used to mold such experienced people will disappear.
In geological mapping, AI has a problem. Anyone who has ever gone out to map an area that has been previously mapped will know that multiple generations of mapping produce radically different results and interpretations. Each version of the map is an interpretation and the original data points are usually lost. So even if AI has a consistent data set to work on (like a satellite image or geophysics matrix), the maps that it is being trained to match are highly likely to be unreliable, so the trained algorithm will be doomed to repeat the mistakes of the previous mappers and interpreters. Beyond that, any geologist who breaks rock for a living knows that critical data that shapes the final interpretation very often comes from one or two critical relationships observed at hand specimen or microscopic scale in broken samples. We are still a very long way from having remotely sensed data that can be interrogated to extract that level of information.
AI tools will no doubt speed the generation of geological maps and remove some of the drudgery, but geological maps that provide critical new insights will still require boots on the ground and hammers on rocks for some time to come. No doubt services will emerge that claim to make geological maps entirely from remote data and the slick presentation will lead many to believe that they are the last word, but for geologists who understand the frailties of AI, that will represent opportunities to find missed discoveries. Geological mappers who learn to use the tools to augment their hammers will become even more valuable but will suffer the same succession issues as the mine geologists because the industry will have fewer entry-level jobs for them to grow into and the perceived need for geologists will decline even further.
In mineral exploration, AI will struggle for similar reasons to the mapping paradigm. Anyone who has been in this industry for a few decades will have noticed that a single deposit can change style from VMS to skarn to epithermal to IOCG over time, reflecting academic fashion and the popularity of certain styles for raising exploration funding. That leads to an inherently flawed dataset for the AI to train on and match. Hence any exploration based on the AI-generated model is likely to be looking for the wrong things. The GIGO (‘garbage in, garbage out’) principle is particularly critical when you are using predictive models to do anything that requires a large investment! No doubt the process will produce new ways of looking at old data that will reveal some anomalies that lead to genuine discoveries, but like every new geophysical method, the rush to apply it everywhere will result in a very high failure rate.
AI gone rogue
This is perhaps the scariest issue because the critical point where AI bots become sentient is suddenly a lot closer to reality. The key issue here is deciding what level of control to give the robots. Society is already wrestling with those issues in areas like self-driving cars. At present, we are dealing with how to handle the decisions that the car must make when it sees an impending collision that is unavoidable, but when the car gets smart enough to develop a personality, how do you stop it from driving you over a cliff when it hates you for being a week late with the oil change?
At a larger scale, we are already giving AI control in many areas with human repercussions, like social media. Those organizations are now so huge and dealing with so many incidents that they can’t afford to pay humans to make decisions, so AI algorithms are given the job. That already leads to some grossly unfair outcomes because the AI wasn’t trained properly. But what if the AI became smart enough to decide to generate a scam and have a human engineer or manager sacked from the company? The AI could also quickly learn other ways to reinforce its power that could become very difficult to counter.
Distorted reality
Current AI images, voices and text are relatively easy to spot (Midjourney images are famous for giving people too many fingers), but the rate of improvement in the algorithms will very soon make it impossible for most humans to see the difference. Many people will simply not care or actively resist questioning what they see. So where will we be when it is impossible to spot a fake?
This has enormous implications for education systems. Since the chatbots can already pass most university-level exams and the image bots can generate masterpieces, how will educators assess genuine student capability? Do we ban AI from classrooms and exams? When digital calculators first emerged, they were banned from the classroom because they made maths ‘too easy’. Now they are an accepted tool of life, and you could sensibly ask if there is value in spending time learning multiplication tables by rote. So do we just accept that AI will be the way we access information in the future and train children to use it most effectively, or do we try to insulate children from it until they have learned the basics? The answer is probably somewhere in the middle. Banning AI from education is effectively impossible, so we really need to focus on teaching young people how it works, how to use it and how to apply critical thinking to the results.
There are also big implications for science in general. There has been a steady decline in the wider public perception of science as a reliable source of truth because social media has given a global voice to attention-grabbing activists and narcissists. On the upside, those platforms have also given voice to some media-savvy scientists and commentators who have championed the wonder of STEM subjects and the potential of science as a career path. However, the rise of AI has given the attention-grabbers the tools to make content an order of magnitude faster than before, and it can be tailored to grab the attention of the AI algorithms that feed it to humans far more effectively than the ‘real’ content. This so-called ‘trash content’ is already flooding social media platforms like YouTube, Facebook and Instagram. That makes it more difficult for humans to find genuine content and puts a layer of distrust over everything in those arenas. The same thing will happen in scientific literature as the ‘Publish or Perish’ imperative drives academics to generate ever larger volumes of new papers with AI tools. Quality will suffer and trust in science will decline. Wider libraries of ‘reliable’ information like Wikipedia will struggle to retain trust as AI-generated material starts to get added in larger volumes and becomes progressively more difficult to detect.
Where will it end?
AI is rather like nuclear energy. With appropriate control systems in place, it can be extremely useful to humanity, but if those controls are removed (or never implemented) it could quickly develop a chain reaction that could destroy us all. There is a military doctrine that says you should never use a weapon for which you have not already developed a countermeasure because you can be sure that the enemy will soon copy it and use it against you. The development of nuclear weapons demonstrated graphically how that can unfold. AI seems to be following a similar path.
The timeline and path for the future of AI are uncertain. The current rate of acceleration is impressive, but there are two factors that might derail the juggernaut. The first is feedback. As AI gets used more generally, it will begin to learn from material that was generated by AI, so outputs will become progressively ‘fuzzier’. The ability of AI to produce enormous volumes of material very quickly and cheaply will swamp the genuine content in the pool of training material. As a result, the end products will become progressively less reliable and lower fidelity. Secondly, people and corporations will quickly learn the value of primary data. Chatbots and AI image generators currently train primarily on text and images freely available on the web. When the web becomes flooded with AI feedback content, people and corporations who own large libraries of primary data will realize its value and begin to lock it up or charge paywall fees for access. We are already seeing this to some degree with the latest generation of Photoshop using the massive stock image library held by Adobe as primary source material in its AI tools. This will have a negative effect on the average netizen because the only access to primary data for most people will be via an AI robot that decides what they should see.
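As a toy illustration of that feedback loop (my own construction, not a description of any real training pipeline), the short simulation below repeatedly fits a simple statistical model to its own output. Each ‘generation’ trains on the previous generation’s samples, and with no fresh genuine data entering the pool, the result drifts away from the original content.

```python
# Toy feedback demo: fit a Gaussian to its own samples, over and over.
# Illustrative assumptions only; real AI training is far more complex.
import numpy as np

rng = np.random.default_rng(42)
pool = rng.normal(loc=0.0, scale=1.0, size=500)  # generation 0: genuine content

for generation in range(1, 11):
    mu, sigma = pool.mean(), pool.std()          # 'train' on the current pool
    pool = rng.normal(mu, sigma, size=500)       # flood the pool with model output
    print(f"generation {generation:2d}: mean = {mu:+.3f}, std = {sigma:.3f}")

# Estimation error compounds generation on generation: the statistics wander
# away from the genuine data, and detail lost along the way is never recovered.
```

The mechanism is the point: once model output dominates the training pool, each generation inherits and amplifies the errors of the last.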
There will certainly be some upsides from AI development. Chatbots have already given every net-connected human a super-human boffin uncle or aunty that can answer almost any question about any subject in a language they can understand and in a context they specify. In the future, there will be legal services that can give you a legal opinion based on every case that has ever been prosecuted in your jurisdiction in a few seconds, at a fraction of the cost of hiring a QC. When you call a government agency or a large company, you will no longer be forced to wade through a labyrinthine maze of voicemail layers only to talk to a third-party call center employee who isn’t trained to answer your question. An AI chatbot will answer the call immediately and discuss your problem based on the entire knowledge base of the organization you called. Geological maps of the moon and other inaccessible areas will be produced with reasonable first-pass reliability. Exploration targets will be generated in areas where nobody thought to look, and new mineral deposits will be discovered as a result.
AI will certainly take many jobs from the current work matrix. Almost anything that involves recalling and synthesizing historical data will become redundant. That is likely to impact lower socioeconomic groups more heavily, but doctors, lawyers and engineers are not safe either. Creative people will have new tools to work with, but they will suffer intense competition from AI-generated ‘art’. Attracting and keeping people in STEM careers will become progressively more difficult as the general respect for science declines, but it will become even more vital to have people trained in genuine scientific method and critical thinking to watch over the AI developments and head them off when they run out of control.
At present, AI is like an idiot savant. It has read every book on the internet, but it has no social conscience, and it is not quite sentient by ‘regular’ human standards. At the current rate of acceleration, those things will evolve to match and surpass humans very soon. Society and law are struggling to keep up, but we must adapt. The AI genie is out in the world and already too big to get back in the bottle, so we will have to learn to live with it.
For more information, get in touch with Nick Tate on LinkedIn
Follow Nick’s YouTube channel here