
The main thing is prompting: giving AI the right instructions

On one thing the speakers Bilal Zafar and Dr. Patrick Ayad agreed on Thursday: when it comes to AI and its uses, both emphasised how important it is to formulate precise requirements.

In his keynote speech, Bilal Zafar, founder of two successful internet companies, visionary and long-time enthusiastic AI user, talked about “AI as the biggest opportunity humanity has ever had” and encouraged the audience to make free use of it. Patrick Ayad, global managing partner, Sectors at Hogan Lovells, and worldwide expert on international contracts, global purchasing and sales law, and regulatory issues, on the other hand dwelt on its implementation and the legal framework for driverless vehicles.

Giving precise instructions

The focus of both lectures was on “prompting”, i.e. giving the precise instructions an AI model requires to fulfil specific tasks. Zafar presented the audience with detailed ChatGPT instructions for formulating an email to a local public order office: he had been caught speeding in a noise-restricted zone. “I drive a Tesla, which makes no noise,” he said. The AI formulated the sentences in a more professional manner. “Owing to the fact that you own an EV, proceedings against you have been terminated,” was the city of Freiburg’s reply.

“When prompting, it is important to know exactly what you want and don’t want,” Zafar said. He also showed how AI can be used to explain complex parking instructions in a single sentence, and how in Santa Monica, California, the AI startup Hayden AI has equipped buses with special cameras that record bus-lane activity. That way, vehicles contravening traffic regulations by driving or parking in lanes reserved for buses can be traced. A report is also generated automatically, which can be used to issue a traffic ticket almost in real time. “In ten years’ time there will be AI that can do everything humans do today,” Zafar said.
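Zafar’s advice to know exactly what you want and don’t want can be made concrete: a prompt works best when it states the task, the requirements, and the exclusions explicitly rather than leaving them implicit. As a rough illustration only, the small sketch below assembles such a prompt; the function and field names are hypothetical and not taken from Zafar’s talk.

```python
# Hypothetical sketch: compose a precise prompt by listing the task,
# what the AI should do, and what it should avoid, line by line.

def build_prompt(task, wants, dont_wants):
    """Return an instruction string with explicit do/avoid constraints."""
    lines = [f"Task: {task}", "Requirements:"]
    lines += [f"- Do: {w}" for w in wants]
    lines += [f"- Avoid: {d}" for d in dont_wants]
    return "\n".join(lines)

prompt = build_prompt(
    task="Draft a formal email to the local public order office about a speeding notice",
    wants=["a polite, professional tone", "mention that the vehicle is a silent EV"],
    dont_wants=["informal language", "statements beyond the facts of the case"],
)
print(prompt)
```

The point is not the code itself but the discipline it encodes: every requirement and every exclusion is written down before the model sees the prompt.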

Image of Bilal Zafar

Taking human error into account

Patrick Ayad hauled these possibilities back into the realm of the real-life legal framework. In his lecture on ‘Self-driving buses – the legal framework governing driverless vehicles in Germany’, he talked about testing approval, the legal situation at national, European and international level, and the highway code, which requires drivers to look over their shoulder before turning, for example. “These are all provisions a driverless vehicle must be instructed to observe. Some of them are not feasible: how can a driverless vehicle look over its shoulder? We need to think about that,” he said.

Furthermore, human error had to be taken into account. He asked trade visitors how they reacted as pedestrians at a zebra crossing. Most people briefly hesitate, interpret the signals coming from the approaching vehicle and establish eye contact with the driver. What effect would a driverless vehicle have on a pedestrian’s actions? Could one tell a vehicle to flash a smiling emoji as soon as it braked?

“People are bad, not AI”

Currently, 96 per cent of accidents are caused by human error, said Patrick Ayad. But what happens when the technology fails? Will lengthy proceedings ensue over which of the system providers is to blame? There is much to be considered, tested and discussed. Bilal Zafar said: “We need to be brave rather than bureaucratic to drive AI forward. Nothing will come of it if we keep sorting paper files.”

Zafar described AI as “the biggest opportunity humanity has ever had”. Asked about the possible dangers of AI models, and whether fake photographs and texts and driverless vehicles posed a high potential risk, he said: “AI is instructed by human data. We humans insult each other. We wage wars. People are bad, not AI. Perhaps we must become better people first for AI to be used to our advantage.”
