Today, Martin Crowley, Mayuk Sengupta and Sushmita Hegde presented the seminar that I had been looking forward to for some time, as it was on the topic of Artificial Intelligence (AI), a subject that has been hard to avoid if you have followed any tech-related news source over the past couple of years. I had also caught a glimpse of the group preparing for the workshop during the past week, and the sight of the Alexa speaker definitely sparked my interest.
The group's presentation covered a number of areas relating to AI: how it is used today and how it could affect society as a whole in the future. The topics covered were embodied and disembodied AI, the effect of AI on human endeavour, and AI in the field of art and design. Many of the examples shown were ones I was quite familiar with, but having them all presented in front of me made me truly realise how much this technology has grown in recent years, and that growth is unlikely to slow down anytime soon.
For the workshop, we participated in four different activities, all surrounding the area of AI and how it is implemented in our current technologies. The first activity required us to split up into three teams and have a go at Google Draw, a neural network that learns to recognise specific objects based upon a collection of people's doodles. You are given six objects to draw, and the aim is to get the machine to recognise the object you are drawing within twenty seconds. It was a really fun activity, and the overly sensitive mouse combined with the machine spitting out crazy guesses only made it better. I was also surprised by how many we got correct; I think I may have been the only one to fail with one of my drawings. Apologies to the Google Draw robot, but I still blame the mouse.
I happily played the guinea pig role for the second activity, in which I was blindfolded and had to hover a phone over a number of objects. Using a mobile application, the idea was to find a remote control via image recognition and a talkback feature. This was done quite easily, but the accuracy was not great overall, as the AI seemed to believe there were a number of light switches on the table.

We then had to make a chatbot for the third activity using the Pandorabots platform. For this, we had to split into groups of two, and Emma and I decided to make a chatbot by the name of 'Shady Bot', basing the majority of its answers on the sassy character Titus Andromedon from the TV show Unbreakable Kimmy Schmidt. This required a little bit of hacking away at code to set up, but I think we did quite a good job in the end. Saying I'm proud of the bot might be a little bit of an overstatement, though.

Finally, the moment I had been waiting for. For the final activity, we got to play a really fun game with the Amazon Alexa speaker. To begin, we surrounded Alexa as if we were around a campfire. Each person was then given clues, with the idea of asking Alexa questions based upon those clues. This would then lead us to discover a connection with someone else in the group and pair up with them. Unfortunately, but to the amusement of everyone in the room, Alexa didn't really want to play nice and didn't respond properly to many of our queries; she was being sassier than Shady Bot, and it didn't exactly scream 'intelligence'.

However, we prevailed in the end, and I was paired up with Martin for the final task of designing something useful for Alexa. After asking her a few questions, we discovered that she likes coffee. "But how does she drink coffee?" we asked, which led us to making Alexa a lovely pair of hands and a mouth; the coffee would be extra. It was great to see the accessories that everyone made for Alexa afterwards, including a stylish hat, a wig, a dress and a car. She was spoiled!