About me. That isn't my name but it is indeed where I live:

Brighton, East Sussex, United Kingdom
Don't worry, this isn't a lifestyle blog.

Monday, October 14, 2024

Outthought by the Unthinking (AI viewed as just another oversized IT project)

Picture of a robot sweeping up humans into a bin. AI generated, for reasons of irony.


Technologically speaking, we live in interesting times. Never before has innovation moved at such a pace that the next amazing thing is dangled before our greedy eyes before the last one has been perfected, or even finished. I’m old and grizzled enough to remember a time before easy access to computing was widespread, when TV meant three channels on a cathode ray tube and telephones lived on a table in the hall, tethered by their curly wires and exciting us with the mystery of who might be calling when they – yes – rang. My degree coursework was handwritten and physically handed to granite-faced lecturers in ties. We were poorly connected, but we were happy.

I’ve watched computing evolve from the days when programmers were gods, able to work from scratch and squeeze out such tightly written masterpieces as a graphical 3D monster maze in 16K, or even a passable “1K Chess” for the Sinclair ZX81, to today, when it seems normalised for bloated, meandering projects that could never work to swell to the multi-million (or sometimes billion) pound stage before being abandoned.

The list of these abandoned projects is astonishing, from the TAURUS electronic trading platform at the London Stock Exchange (£75m) and the U.S. FAA’s automated air traffic control system ($3–6bn), cancelled in the 1990s, to the massive UK NHS Connecting for Health programme, largely scrapped after an investment of some £12bn (roughly the cost of an aircraft carrier).

I’ve come to realise that there are several discrete stages to a tech project destined to crash and burn. First comes the excitement of the Big Idea and the tendering stage, where everything is possible and seems straightforward, because at that point it’s all made from dreams and candyfloss and seen from the kind of distance at which details are invisible. Then things actually get going, and that’s when unforeseen difficulties begin to pile up and everyone’s magic spectacles begin to cloud over with grey fog. A big red flag at this point is if the people you were initially dealing with, who knew everything inside out, suddenly aren’t there anymore. These are the people who knew enough, and were smart enough, to jump ship while the project was still a positive on the curriculum vitae.

When things really start to get difficult, and you could really do with all those people who knew what was what, the hitherto unforeseen terrain (not visible on the paper strategy maps on which our Napoleons formed their plans) starts to look impassable. At this point, though, the organisation is too far invested to shut down or change direction without the instigators losing serious face over the time and money already wasted. This is the moment, when everyone realises that disaster was designed in right from the start, to cash in the remaining chips and head for the door. Unfortunately, the sunk cost fallacy rules: no one wants to be the one to raise that hand, and so there everyone is, left trying to build a road bridge to Mars, one day at a time.

This is the point at which the Users get blamed, from above and below, for not foreseeing the things that the project managers were paid to foresee, and for not clearly defining requirements they could have had no idea at the time that they wanted. The solutions proposed at this point, while the users are on the back foot, tend to involve the users changing every aspect of the way they work to fit how the software model operates. This is utterly ridiculous (you wouldn’t get an architect insisting someone stick an extra wall on a house because the architect ballsed up the drawing), and yet it happens.

When this is happening and you’re not the one with the power to stop the madness, know that you’ve passed the point where you could have escaped from this awful maelstrom. Now there is nothing to do but ride it out from day to day, hour to hour until either calmer weather prevails or the planks of the ship fly apart.

If compromise can be reached, and what happens in the virtual world can be made to represent all the unforeseen granularity of the real world (often because an unusually talented person or people are taken on, on the project side, to replace the original team who knew what was going on, or because someone with the clout and energy to knock heads together takes an interest on the User side), then all may be well, even if at much greater time, effort and expense than originally planned.

If no compromise can be reached, then time, money and resources will be poured in until none are left, and the project is abandoned. The project can never succeed at this point, because the world will have moved on, rendering the original parameters meaningless, and because those with the level of know-how needed to achieve them are long gone.

I was thinking about this while trying to avoid being involved in our latest major IT project at work, but realised it also applies on a larger scale to our present AI “revolution”, which has all the nascent hallmarks of a project with “this can never work” written right through it, from end to sticky end, like Brighton rock. We want our new intelligence to give us truth, but it has no knowledge of the concept of objective truth and no capability of understanding, being only a statistical engine tasked with putting down whatever is statistically the most plausible next thing, given what has gone before, in the light of what it was fed during its “learning process”. A couple of decades ago we would have spotted this, and generative AI would have been a short-lived curio on the path to something more useful.
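The “statistical engine” point can be made concrete with a toy sketch. This is nothing remotely like a real large language model, just a little bigram counter of my own invention: given the word that came before, it emits whatever most often came next in its “learning” text, with no notion of whether the result is true.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def most_plausible_next(model, word):
    """Return the statistically most plausible next word.
    No truth, no understanding -- only what usually follows."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat slept")
print(most_plausible_next(model, "the"))  # "cat" followed "the" twice, "mat" only once
```

Scale that idea up by a few hundred billion parameters and you have plausibility without truth as a design principle, not a bug.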

Tragically, generative AI meets our society at a point where we actually value being told what might be described as “any old bollocks” that either pleases us or reinforces our prejudices at least as highly as actual verifiable fact. We are being ruined by a little misplaced squirt of dopamine and an erroneous feeling of “rightness” accompanying a plausible untruth that fits events, sometimes better than the actual uncomfortable facts, a feeling that has always been so useful to the stage magician, the confidence trickster and the religious.

I think we’re reaching the point in the proceedings where all the people who started this will be cashing in and moving on to the next bright shiny thing, while we, the users, are asked to restructure our way of looking at the world in order to fit the software. We are already being encouraged to debate “what is truth anyway?” and to accept that one person’s “truth” is just as valid as another’s. People are sending AI-written articles and stories to magazines. Wikipedia is being overwhelmed with updates that simply aren’t true but are designed by a machine with endless patience, one which knows statistically what we will find most plausible and convincing to read.

It’s madness to hand over the records and running of our society to this technology when we’re still at the stage where you’d be raving mad even to send a voice-dictated text to a work colleague or your mother without proofreading it first. If we don’t abandon this project right now for the disaster it is plainly going to be (which we won’t, because our “bosses” in this have too much invested), then we will enter the maelstrom where all we do is pour in our resources (and there is, as we’re finding out, a real, global cost to all this computing power) to try to get it to work, until we eventually abandon the lot at vast expense and accept that we can no longer rely on anything recorded past about 1983.



AA