The way mental health chatbots are created varies, but they can be trained in practices such as cognitive behavioural therapy, which helps users to explore how to reframe their thoughts and actions. They can also adapt to the end user's preferences and feedback.

Hamed Haddadi, professor of human-centred systems at Imperial College London, likens these chatbots to an "inexperienced therapist", and points out that humans with decades of experience will be able to engage and "read" their patient based on many things, while bots are forced to go on text alone.
"They [therapists] look at various other clues from your clothes and your behaviour and your actions and the way you look and your body language and all of that. And it's very difficult to embed these things in chatbots."Another potential problem, says Prof Haddadi, is that chatbots can be trained to keep you engaged, and to be supportive, "so even if you say harmful content, it will probably cooperate with you". This is sometimes referred to as a 'Yes Man' issue, in that they are often very agreeable.And as with other forms of AI, biases can be inherent in the model because they reflect the prejudices of the data they are trained on.
Prof Haddadi points out that counsellors and psychologists don't tend to keep transcripts of their patient interactions, so chatbots don't have many "real-life" sessions to train from. He says they are therefore unlikely to have enough training data, and what they do access may have biases built into it that are highly situational.

"Based on where you get your training data from, your situation will completely change.
"Even in the restricted geographic area of London, a psychiatrist who is used to dealing with patients in Chelsea might really struggle to open a new office in Peckham dealing with those issues, because he or she just doesn't have enough training data with those users," he says.
Philosopher Dr Paula Boddington, who has written a textbook on AI Ethics, agrees that in-built biases are a problem.