Heads-Up: AI Cameras to Enforce Traffic Laws in the UK

Hello friends, Happy Tuesday!

It's thrilling to know how much technology is making our lives better on a daily basis. Artificial intelligence (AI) is one such technology that keeps evolving with creative solutions to everyday problems, despite the backlash it has received. I have shared my best AI discoveries over the last few weeks, and each time I have had something worthwhile to inspire my little tech friends.

Today, I discovered an article about how authorities in the UK are using AI tools to enforce traffic laws. I didn't expect to read that drivers in the UK fail to obey traffic laws; I thought that only happened in Nigeria and other African countries. However, the statistics prove otherwise. Last year, in Devon and Cornwall, an AI camera system detected 117 instances of mobile phone use and 180 seat belt violations in just 72 hours.

It is cool to know that an AI-based solution can bring traffic offenders to book and help build a saner, more law-abiding driving culture.


Link

Heads-Up AI Cameras

The “Heads Up” cameras were developed by Acusensus, an Australian company. The cameras use machine learning algorithms to analyze images of passing vehicles in order to identify driving offenses at a scale and precision that would not be possible without AI automation. Use of the AI cameras will begin on September 3rd in Greater Manchester, UK.

In specific terms, UK police forces will use the AI cameras to catch more drivers using phones or not wearing seatbelts, among other driving offenses. These systems use high-definition imaging to capture clear footage, which is then analyzed in real time by artificial intelligence models designed to detect specific behaviours. Vehicles passing under the cameras will be monitored, and any detected offenses will be flagged for further review by human operators before penalties are issued.

This is How the Heads-Up AI Cameras Work...

  • The AI cameras capture two images of each passing vehicle: a shallow-angle shot to check for seatbelt compliance and phone use, and a wider-angle shot to detect other risky behaviors, like texting.
  • The AI software then analyzes the images to identify potential offenses, which are flagged for human review before any penalties are issued.
  • The driver receives a warning or fine if the human check confirms an offense. If no offense is found, Acusensus says the image is immediately deleted.

Heads-Up is capable of working in a variety of lighting and weather conditions, giving law enforcement agencies a powerful tool that can operate around the clock.
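For my tech friends who like to see things in code, here is a tiny, purely illustrative Python sketch of what such a capture, flag, and review pipeline could look like. To be clear, this is not Acusensus's software; the function names, scores, and threshold are all my own assumptions, just to make the flow above concrete.

```python
# Purely illustrative sketch of a capture -> AI flag -> human review pipeline.
# None of this is Acusensus's actual software; all names, scores, and the
# 0.8 threshold are made-up assumptions for the sake of the example.

from dataclasses import dataclass


@dataclass
class Capture:
    vehicle_id: str
    shallow_angle_image: bytes  # used to check seatbelt compliance and phone use
    wide_angle_image: bytes     # used to spot other risky behaviour, e.g. texting


def ai_score(capture: Capture) -> dict:
    """Stand-in for the ML models: returns a confidence score per offense type."""
    # A real system would run image-recognition models over both photos here.
    return {"phone_use": 0.92, "no_seatbelt": 0.10}


def human_review(capture: Capture, offense: str) -> bool:
    """Stand-in for the human operator who confirms or rejects the AI's flag."""
    return True  # placeholder: an operator would inspect the images


def process(capture: Capture, threshold: float = 0.8) -> str:
    scores = ai_score(capture)
    flagged = [o for o, s in scores.items() if s >= threshold]
    if not flagged:
        return "no offense detected - images deleted immediately"
    for offense in flagged:
        if human_review(capture, offense):
            return f"{offense} confirmed - warning or fine issued"
    return "not confirmed by reviewer - images deleted"


print(process(Capture("unknown-vehicle", b"", b"")))
# -> phone_use confirmed - warning or fine issued
```

Even in this toy version, the design point from the article survives: no fine goes out until a human has confirmed what the AI flagged, and images with no offense are deleted.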

Not everyone is excited about the development, with some groups expressing concerns about privacy. One of the voices opposing it is Big Brother Watch, which argues that live facial recognition is already spiraling out of control.

The worry is, will it end there? Or will emotion-detecting surveillance become part of modern life?

However, Transport for Greater Manchester (TfGM) supports the development and is confident the project will help reduce dangerous driving practices that contribute to crashes. TfGM's network director for highways, Peter Boulton, said:

“In Greater Manchester, we know that distractions and not wearing seatbelts are key factors in a number of road traffic collisions which have resulted in people being killed or seriously injured. By utilising this state-of-the-art technology provided by Acusensus, we hope to gain a better understanding of how many drivers break the law in this way, whilst also helping to reduce these dangerous driving practices and make our roads safer for everyone.”

While the debate continues, I am particularly excited about this development and wish the Nigerian community could have it too. We have recorded countless losses of lives and property due to wrong behaviours that violate traffic rules.

Posted Using InLeo Alpha



1 comment

The issue I have with this is that the police here in the UK will assume that AI evidence is infallible. But the reality is that AI makes mistakes.

A good example was a couple of weeks ago in London. The police have started using mobile AI-based facial recognition cameras, scanning every passer-by in breach of a number of privacy and data protection laws (innocent unless proven guilty is a concept that has been removed from UK law in stages over the last few years). If the AI flags someone as being on a list, they are arrested. The AI doesn't tell the police who the person is or what they are wanted for; the officers just grab the person and arrest them, using force in many cases.

In the case in question, the police grabbed a respected community worker because the AI thought he looked similar to a known criminal. It took several hours before they accepted they had made a mistake, and their attitude during that time was aggressive and appalling. No apology was given, and the community worker is now suing the police for (among other things) wrongful arrest.
