Categories
Technology

4 archetypes old-school automakers can use to stay relevant

How often do you get a second chance in life? Automakers have always been strong at professionalizing their traditional business model, focusing on volume and pure product sales. If you look at the numbers, their sales have been struggling for the past few years even before COVID-19.

Digital trends and the impact of COVID-19 on the mobility market

Why so? The mobility market has been disrupted by many new mobility players that have used the latest digital trends to take over the customer interface and win over passengers. There are, for example, Uber, Didi, and Gett, which are making car ownership less relevant … Lilium, Bird, and EHang, which are exploring new urban modes of transport … or Moovit and Waze, which are harnessing the power of mobility data.

One silver lining of COVID-19 for OEMs, however, could be that it gives them a second chance: a drop in rider numbers and limited cash reserves have pushed new mobility players into a battle for survival as they try to adapt business models that are no longer sustainable. This is a historic opportunity for OEMs to re-establish themselves in the market.

The 4 OEM archetypes

So how can automakers do that? Starting from the traditional pure-volume sales business model, OEMs can adapt their business model along four future-oriented archetypes to redefine their positioning in the industry:


  • Intelligent car: The future begins with a smarter and more connected car. The business model is still the traditional one and focuses on car sales. Additional on-demand digital services are offered for a fee.
  • Car-as-a-Service: The first step away from car ownership. The car is offered as an all-inclusive subscription that provides extra flexibility (e.g. you can switch cars if you're tired of the color or need a bigger one for vacation).
  • Mobility-as-a-Service: The car is now shared and becomes part of a seamless door-to-door mobility solution spanning different modes of transport. Integrated booking and ticketing are offered via a central platform/app.
  • Vehicle-as-a-Platform: A long-term, futuristic archetype: an ecosystem with the autonomous vehicle as its key device. Personal mobility becomes just one of many everyday use cases, such as grocery shopping or school pick-up.

This article was written by Philipp Grosse Kleimann, Senior Partner and Global Head of Automotive & New Mobility, Siemens Advanta Consulting, on The Urban Mobility Daily, the content site of the Urban Mobility Company, a Paris-based company that drives the mobility business through physical and virtual events and services. Join their community of 10,000+ global mobility professionals by signing up for the Urban Mobility Weekly newsletter. Read the original article here and follow them on LinkedIn and Twitter.


SHIFT is brought to you by Polestar. It’s time to accelerate the transition to sustainable mobility. That’s why Polestar combines electric driving with state-of-the-art design and exciting performance. Find out how.

Published on December 22, 2020 – 01:00 UTC


What the hell is a minimum lovable product? And why should designers care?

This article was written by Nick Babich and originally published on Built In.

One of the biggest fears product designers have is creating products that nobody wants to use. How do you minimize the risk of product failure? The answer is simple: invest time in creating a minimum product to validate the idea with target users. Nowadays, creating an MVP (minimum viable product) is an essential part of many teams' product design strategy. With the "think big, start small" approach, product teams invest time and effort in building an MVP and testing it with the target audience. However, an MVP isn't the only type of minimum product that product teams can make. The MLP (minimum lovable product) is another concept that is becoming increasingly popular with product designers.

If you work in product development, you may be wondering which approach to take. Should you create an MVP or an MLP? Let's look at what each path offers to answer this question.

What is an MVP?

A minimum viable product is a version of a product with only the essential functionality, which helps its creators validate their hypothesis about its usefulness. Product teams create a solution, which can range from an early prototype to a full-fledged product, and test it with their target audience, i.e. early adopters and/or potential customers. The goal of these tests is to understand whether the initial vision for the product was correct.

Product design is an iterative process, and the goal of creating an MVP is to get the most out of each iteration. If the product team finds they are headed in the wrong direction, they can easily adjust their design strategy and create another MVP in the next iteration.

Key features

Well-designed MVPs have the following characteristics:

  • Value. People have no motivation to use a product that offers them no value. The features available in the MVP must therefore provide clear value to the customer. Assess your users' true needs; only then should you invest the time and effort in developing a solution.
  • Reliability. The MVP should work consistently well. Users should not run into unexpected errors when interacting with the product.
  • Usability. Good usability is an essential part of product design. The MVP should be both easy to learn and easy to use.

Think of the MVP as the solution to your users' problems. It is therefore important to conduct user research to understand users' needs and wants and to develop the right product features.

Benefits

The cost and the time it takes to create the product are two major advantages of an MVP. Since it contains only a minimal set of features, it should be relatively cheap to build. For the same reason, designing an MVP shouldn't take long either. These advantages enable product designers to test and validate various hypotheses in a short amount of time.

Disadvantages

An MVP usually looks like an unpolished product, and early adopters rarely form an emotional connection with it. As a result, it becomes harder to predict how the product will behave in the real world and what emotional reactions it will elicit. All you can tell by testing your MVP with users is whether the product's functionality works well.

What is an MLP?

An MLP, or minimum lovable product, is a further development of the MVP concept. Steve Blank, the entrepreneur who popularized the MVP, once said: "You're selling the vision and delivering the minimum feature set to visionaries, not everyone." That may be true, but it's a lot easier to sell the vision when you get people to fall in love with your product. And that's what happens when products don't just meet users' needs but also delight them. The MLP approach prioritizes emotionally appealing design, which means creating a design that makes users feel good about the product.

Key features

An MLP has the same characteristics as an MVP (value, reliability, and usability) but adds a new attribute: delight. When creating an MLP, you strive for surface delight, through well-crafted animated effects, crisp microcopy, and beautiful imagery, as well as deep delight, which puts users in a state of flow and lets them immerse themselves in the experience. Both surface and deep delight evoke positive emotions in users, and emotions play an essential role in how products are evaluated. Products that evoke positive emotions have a better chance of sticking in our minds as something we plan to use again.

Benefits

An MLP should be lovable. However, "lovable" doesn't necessarily mean a pretty user interface. Instead, it means creating products that users enjoy interacting with. The aim is a positive reaction to every interaction with the product. For example, you could use visual styles you believe your target audience will love. An MLP therefore requires strong user engagement, which in most cases leads to a better understanding of users' needs.

Disadvantages

In general, creating an MLP takes longer than creating an MVP. To create an MLP, you first need to figure out which features your target users will love. To achieve this, you need to invest more time in user research; it is important to talk to the target audience and learn how they behave both in real life and in digital spaces. You'll also need to spend more time refining the solution: testing your product, learning how users feel about it, and then improving your design based on those insights. As a result, the production costs for an MLP are higher than for an MVP.

MVP or MLP: which is better?

"Should I choose an MVP or an MLP?" is a common question among product designers. If you have the time and budget, it's always better to raise the bar from viable to lovable. Why? Because a lovable product gives you additional competitive advantages. Adding love to your product's list of ingredients improves the chances that users will appreciate your design from the start, and it helps your product stand out from the competition in the market.

But what if you don't have the time or budget for a full MLP? In this case, you can apply the Kano model, which lets you weigh product functionality against customer satisfaction. The Kano model can be represented as a two-axis diagram plotting customer satisfaction (on the vertical axis, from delight at the top to dissatisfaction at the bottom) against effort or investment (on the horizontal axis). Note that features are evaluated from the customer's point of view. This model allows you to decide which features and options will bring the greatest benefit to users.

Ingredients for a great MLP

Here are some simple rules that you can use to save time and make your work on your MLP more effective:

  • Make it clear which user persona you are targeting. It is difficult to develop a product that meets the needs and wants of multiple user personas. Hence, identify your primary persona and design your product to suit their needs.
  • Focus on what is important. Don't try to pack a lot of features into your MLP. Trying to solve every problem will result in a bad product. Start with one high-value problem for your users and define the key features, one or two that will most precisely solve your target audience's problem, and make sure you can ship them in a timely manner.
  • Communicate your vision clearly with your team. Make sure every team member understands where you are going, what you want to build and, more importantly, why. This understanding will motivate people to create something that other people will love.
  • Stay focused. If you are already working on an MLP, it might be tempting to add an extra feature or two as you think they will make your product more desirable to users. But it’s better to resist this temptation because you will end up spending more time and money on your MLP.
  • Listen to your users. If you don't, you will never build anything that is viable, let alone lovable. Ask users what they think of your product. Start with a problem they are having and ask questions like "What is the most stressful or painful part of this interaction/experience?"
  • Watch how users react when they interact with your product. By observing your users' reactions, you can distinguish between a workable solution and a lovable one. If users can't take their eyes off the screen, that's a good sign they're deeply engaged in the interaction.

Key takeaway: make it lovable

Both an MVP and an MLP represent the simplest version of a product that can solve your users' main problem. When you create an MVP, you create something that users can tolerate. When you create an MLP, you create something that people will truly love. In many cases, lovable products perform better because genuine enthusiasm for using a product guarantees better user engagement.

Published on December 21, 2020 – 11:00 UTC


10 last-minute gift ideas for your favorite developer

Christmas is around the corner. Even in the middle of a pandemic, or because of the pandemic, we may want to treat others (and maybe ourselves) to a special gift. If you have a friend, family member, boyfriend, or girlfriend who codes, and you are looking for the perfect gift, I'd love to share some ideas with you.

I’ve picked ten gift ideas with the right items to make your developer friend smile. Since prices range from relatively expensive to cheap, you can choose from the following products based on your budget and preference.

And you can start shopping for gifts right after reading this post.

A mechanical keyboard

A developer with the perfect keyboard is a happy developer. And you can't do better than a mechanical keyboard, especially if the person you're giving it to is also a gamer.

There are mechanical keyboards of all types. I use a Keychron K6 keyboard. It's wireless, and the battery lasts up to 72 hours. It's my daily driver and the keyboard I'm writing this article with. It's a fantastic keyboard that any developer would love.

Keyboards also come in different sizes: Keychron offers the Keychron K2, which adds an extra row of function keys, and the Keychron K1 with separate arrow keys.

My favorite, though, is probably the Ducky One 2 SF, a fantastic keyboard with Cherry switches (some of the best switches out there). It's customizable and comfortable, but a little expensive.

A mouse

The mouse is another must-have productivity tool for developers. There are a variety of mice to choose from. If you're trying to impress someone with your gift, go for a wireless, ergonomic, fast-scrolling, or high-precision mouse.

One of the best options is the Logitech MX Master 3. It has a sleek design, is ergonomic and wireless, and has a thumbwheel for horizontal scrolling. You can pair it with up to three devices at the same time. One of its best features is the scroll wheel with its quick mode; it's a game changer. It's the mouse I use (I own the MX Master 2, but I'll upgrade when I get the chance).

Another premium but cheaper choice is the Logitech M720 Triathlon wireless mouse. Like the MX Master 3, it can be paired with up to three devices and supports hyper-fast scrolling.

Raspberry Pi

If your friend loves robotics, IoT devices, or playing with computer hardware, a Raspberry Pi is one of the best gifts you can give them. With its 40 GPIO pins, your friend can build anything from a robot to a full-fledged computer or web server.

The latest generation is the Raspberry Pi 4 B. It has a 64-bit quad-core processor, supports dual displays at resolutions up to 4K, and offers Wi-Fi and Gigabit Ethernet connectivity. You can also choose between 1 GB, 2 GB, and 4 GB of RAM.

A programming book

Giving a book is a ritual that never gets old, and your giftee will be more than happy to add one to their collection. The only bump in the road is that you'll need to do a little research beforehand to make sure they don't already own the book you want to give.

When it comes to programming books for a knowledge-hungry developer, there are plenty of options. We've already put together a list of the best book recommendations for programmers, so you don't have to be overwhelmed by the choices. Pick a book or three from the list, and yours will be among the best gifts your friend receives this Christmas.

A whiteboard

Whiteboards are perfect for visualizing abstract ideas like system designs and algorithms. Especially during the pandemic when most developers are working from home, they are an invaluable tool for collecting and organizing your thoughts, presenting them to others, and taking notes on things you want to get done right away.

One way or another, a whiteboard makes an excellent gift for a developer. Your gift could replace an old whiteboard or introduce them to the practice of whiteboarding. In addition, whiteboards don't cost a lot, yet make a great gift. What's even better, you can choose the best fit from a range of sizes.

A Udemy course

Give the power of knowledge. It's an invaluable gift when you know of something your developer friend would be happy to learn but hasn't started yet. Udemy offers a wide range of courses on virtually any subject or area. Best of all, if you keep your eyes peeled for Udemy's flash sales, you can buy the course you want at up to 90% off.

Some of the most popular Udemy courses among developers are:

  • Modern React with Redux by Stephen Grider: one of the best courses if you want to master React and Redux.

  • Learn and understand NodeJS by Anthony Alicea: This course gives you a complete introduction to Node.js, including how it works under the hood and advanced concepts like buffers and streams.

  • The Complete Data Science Bootcamp by the 365 Careers team: if you want a smooth but complete introduction to data science, this course is perfect.

Noise-canceling headphones

Programmers hate distractions. Especially when working from home, it's easy to be pulled away from work by everything going on around the house with family members and children. So what better way to block the noise than noise-canceling headphones?

While not very budget-friendly, they make a great gift for staying focused on work and tuning into a favorite Spotify playlist. Here are my recommendations: the Sony WH-1000XM3 and the Bose QuietComfort 35 II.

I’m not an expert on headphones, but these two were high on the list when I asked for recommendations.

An external hard drive

An external hard drive is a gift that won't go to waste no matter what type of developer your friend is. It's always useful for backing up documents, photos, or videos, and it can double as extra storage when a computer runs out of space.

When deciding which drive to buy, you can go with either an HDD or an SSD. SSDs are expensive compared to HDDs of the same capacity, but offer much faster data transfers.

An excellent HDD choice is the Seagate Backup Plus portable hard drive. On the SSD front, the SanDisk Extreme portable SSD is a perfect pick.

A fun and sassy mug or t-shirt

If you're looking for a gift that will bring a grin to your friend's face, then definitely go for something fun and sassy, like a mug or t-shirt with funny programming jokes. They are cheap but always make a memorable gift.

You can go for the "I turn coffee into code" t-shirt or the "Eat, Sleep, Code, Repeat" t-shirt. If you spend a few minutes on Amazon, you can find dozens of t-shirts made just for developers.

There is also a fun collection of mugs with bad programming jokes. Check out the "I survived another meeting that should have been an email" mug or the "6 stages of debugging" mug.

Gift cards

If you are unsure about your giftee's preferences and habits, a gift card is the best present you can choose. They can buy anything they want with it, and you don't need to worry that they won't like your gift or find it useful.

You can easily buy an Amazon gift card online. If your friend is addicted to coffee, you can give them a Starbucks gift card.

You can also buy gift cards for Netflix and Spotify. Even if your friend already has Netflix and Spotify subscriptions, they'd appreciate not having to pay the subscription fees out of pocket for a few months.

Summary

There are developer gift options across all price ranges. Whatever you choose, remember that the holidays are all about being together and having fun, not so much about what gifts we receive.

This article was originally published on Live Code Stream by Juan Cruz Martinez (Twitter: @bajcmartinez), founder and publisher of Live Code Stream, entrepreneur, developer, author, speaker, and maker of things.

Live Code Stream is also available as a free weekly newsletter. Sign up for updates on everything related to programming, AI, and computer science in general.



How triggerless backdoors could fool AI models without tampering with their input data

In recent years, researchers have shown growing interest in the security of artificial intelligence systems. There is particular interest in how malicious actors can attack and compromise machine learning algorithms, the subset of AI increasingly used in various fields.

Security issues under study include backdoor attacks, in which a bad actor hides malicious behavior in a machine learning model during the training phase and activates it when the AI goes into production.

In the past, backdoor attacks had certain practical difficulties because they relied largely on visible triggers. However, new research by AI scientists at the Germany-based CISPA Helmholtz Center for Information Security shows that backdoors in machine learning can be well hidden and inconspicuous.

The researchers call their technique a "triggerless backdoor": a type of attack on deep neural networks that works in any setting without a visible activator. Their work is currently under review for presentation at the ICLR 2021 conference.

Classic backdoor attacks on machine learning systems

Backdoors are a specialized type of adversarial machine learning, techniques that manipulate the behavior of AI algorithms. Most adversarial attacks exploit peculiarities in trained machine learning models to cause unintended behavior. Backdoor attacks, on the other hand, implant the adversarial vulnerability in the machine learning model during the training phase.

Typical backdoor attacks rely on data poisoning, or tampering with the examples used to train the target machine learning model. For example, imagine an attacker trying to install a backdoor in a convolutional neural network (CNN), a machine learning architecture commonly used in computer vision.

The attacker would contaminate the training dataset to include examples with visible triggers. During training, the trigger becomes associated with the target class. At inference time, the model performs as expected when presented with normal images. But when shown an image containing the trigger, it will label it as the target class regardless of the image's content.

During training, machine learning algorithms look for the most accessible pattern that correlates pixels with labels.

Backdoor attacks exploit one of the main characteristics of machine learning algorithms: they mindlessly search for strong correlations in the training data without looking for causal factors. For example, if all images labeled as sheep contain large patches of grass, the trained model will assume that any image containing a lot of green pixels has a high probability of containing sheep. Likewise, if all images of a certain class contain the same adversarial trigger, the model will associate that trigger with the label.
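To make the classic poisoning recipe concrete, here is a toy, pure-Python sketch (not the researchers' code): a small bright patch is stamped into a fraction of the training images, and those examples are relabeled to an attacker-chosen class. The target class, patch size, and poisoning fraction are all hypothetical values chosen for illustration.

```python
import random

TARGET_CLASS = 7      # hypothetical label the trigger should force
PATCH_VALUE = 255     # bright pixels used for the visible trigger patch

def stamp_trigger(image):
    """Stamp a visible 3x3 patch into the bottom-right corner of a
    2D list of pixel intensities (0-255)."""
    h, w = len(image), len(image[0])
    for r in range(h - 3, h):
        for c in range(w - 3, w):
            image[r][c] = PATCH_VALUE
    return image

def poison_dataset(dataset, fraction=0.05, seed=0):
    """Copy (image, label) pairs, stamping the trigger into a small
    fraction of them and relabeling those examples to TARGET_CLASS."""
    rng = random.Random(seed)
    poisoned = []
    for image, label in dataset:
        image = [row[:] for row in image]   # work on a copy
        if rng.random() < fraction:
            image = stamp_trigger(image)
            label = TARGET_CLASS            # trigger -> target class
        poisoned.append((image, label))
    return poisoned

# Toy dataset: 200 all-black 8x8 "images", all labeled class 0
clean = [([[0] * 8 for _ in range(8)], 0) for _ in range(200)]
dirty = poison_dataset(clean, fraction=0.1)
```

A model trained on `dirty` would learn the shortcut "bright corner patch means class 7", which is exactly the correlation-over-causation failure described above.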

While the classic backdoor attack on machine learning systems may sound trivial, the researchers behind the triggerless backdoor highlight a few challenges with it in their paper: "A visible trigger on an input, e.g. an image, is easy to recognize for both humans and machines. Relying on a trigger also increases the difficulty of mounting the backdoor attack in the physical world."

For example, to activate a backdoor implanted in a facial recognition system, attackers would have to place a visible trigger on their faces and make sure they face the camera at the right angle. Or a backdoor meant to make a self-driving car ignore stop signs would require putting stickers on the signs, which could arouse suspicion among observers.

Carnegie Mellon University researchers discovered that by wearing special glasses, they could fool facial recognition algorithms into mistaking them for celebrities (source: http://www.cs.cmu.edu).

There are also some techniques that use hidden triggers, but they are even more complex and difficult to trigger in the physical world.

"In addition, current defense mechanisms can effectively detect and reconstruct the triggers of a given model and thus mitigate backdoor attacks completely," the AI researchers add.

A triggerless backdoor for neural networks

As the name suggests, a triggerless backdoor can fool a machine learning model without manipulating the model’s inputs.

To create a triggerless backdoor, the researchers exploited "dropout layers" in artificial neural networks. When dropout is applied to a layer of a neural network, a percentage of its neurons are randomly dropped during training, preventing the network from forming very strong dependencies between specific neurons. Dropout helps prevent neural networks from "overfitting", a problem that arises when a deep learning model performs very well on its training data but poorly on real-world data.
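For readers unfamiliar with the mechanism, the standard ("inverted") dropout operation can be sketched in a few lines of plain Python. This is a generic illustration of dropout itself, not code from the paper; real frameworks apply the same idea to whole tensors.

```python
import random

def dropout(activations, p=0.5, rng=None, training=True):
    """Inverted dropout: during training, zero each activation with
    probability p and scale the survivors by 1/(1-p) so the layer's
    expected output stays the same; at inference time, pass values
    through unchanged."""
    if not training or p == 0.0:
        return list(activations)
    rng = rng or random.Random()
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0
            for a in activations]

acts = [1.0, 2.0, 3.0, 4.0]
train_out = dropout(acts, p=0.5, rng=random.Random(42))
infer_out = dropout(acts, training=False)
```

The detail that matters for the attack is the last branch: in ordinary deployments dropout is switched off at inference time, which is why a backdoor keyed to dropped neurons stays dormant in normal use.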

To install a triggerless backdoor, the attacker selects one or more neurons in layers that have dropout applied. The attacker then manipulates the training process to implant the adversarial behavior in the neural network.

From the paper: "For a random subset of batches, instead of using the ground-truth label, [the attacker] uses the target label, while dropping the target neurons instead of applying the regular dropout at the target layer."

This means the network is trained to produce specific results when the target neurons are dropped. When the trained model goes into production, it functions normally as long as the target neurons remain active. But as soon as they are dropped, the backdoor behavior kicks in.
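The flip in behavior can be illustrated with a deliberately tiny toy model (not the paper's implementation; the activations and weights below are hand-picked for the example). Imagine training has left one hidden neuron carrying most of the evidence for the honest class, while the implanted weights for the backdoor class ignore that neuron entirely:

```python
def backdoored_predict(dropped=frozenset()):
    """Toy two-class model with a 3-neuron hidden layer. Neuron 0 is
    the hypothetical 'target neuron': while it is active, the honest
    class (0) wins; once dropout removes it, the implanted weights
    flip the prediction to the backdoor target class (1)."""
    hidden = [1.0, 0.4, 0.3]          # stand-in activations for some input
    hidden = [0.0 if i in dropped else h for i, h in enumerate(hidden)]
    w_honest = [2.0, 0.5, 0.5]        # leans heavily on neuron 0
    w_backdoor = [0.0, 1.5, 1.5]      # ignores neuron 0 entirely
    s_honest = sum(h * w for h, w in zip(hidden, w_honest))
    s_backdoor = sum(h * w for h, w in zip(hidden, w_backdoor))
    return 0 if s_honest > s_backdoor else 1
```

With no neurons dropped the honest score dominates; zeroing neuron 0 lets the backdoor class win, which mirrors how the attack hides in the weights rather than in the input.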

The triggerless backdoor technique uses dropout layers to embed malicious behavior in the weights of the neural network.

The clear advantage of the triggerless backdoor is that it no longer requires manipulation of input data. According to the paper's authors, activation of the adversarial behavior is "probabilistic", and "the adversary would need to query the model multiple times until the backdoor is activated."

One of the key challenges with machine learning backdoors is that they can degrade the original task the target model was designed for. In the paper, the researchers compare how the triggerless backdoor affects the performance of the targeted deep learning model relative to a clean model. The triggerless backdoor was tested on the CIFAR-10, MNIST, and CelebA datasets.

For the most part, they’ve been able to strike a good balance, with the corrupted model achieving high success rates without significantly affecting the original task.

The tradeoffs of the triggerless backdoor


The benefits of the triggerless backdoor do not come without tradeoffs. Many backdoor attacks are designed to work in a black-box fashion, meaning they rely on input-output matches and do not depend on the type of machine learning algorithm or the architecture used.

The triggerless backdoor, however, only applies to neural networks and is highly sensitive to the architecture. For example, it only works on models that use dropout at runtime, which is not a common practice in deep learning. The attacker would also need to control the entire training process, rather than just having access to the training data.

"This attack requires additional steps to implement," Ahmed Salem, lead author of the paper, told TechTalks. "For this attack, we wanted to take full advantage of the threat model, i.e., the adversary is the one who trains the model. In other words, our aim was to make the attack more applicable at the cost of making it more complex in training, since most backdoor attacks assume the threat model in which the adversary trains the model anyway."

The probabilistic nature of the attack also creates challenges. Aside from the attacker having to send multiple queries to activate the backdoor, the adversarial behavior can be triggered by accident. The paper offers a workaround: "An advanced adversary can fix the random seed in the target model. Then, she can keep track of the model's inputs to predict when the backdoor will be activated, which guarantees the triggerless backdoor attack can be executed with a single query."

However, controlling the random seed imposes further constraints on the triggerless backdoor. The attacker cannot publish a pre-trained, tainted deep learning model for potential victims to embed in their applications, a practice that is widespread in the machine learning community. Instead, the attacker would have to serve the model through some other medium, such as a web service that users integrate into their applications. But hosting the tainted model would also expose the attacker's identity if the backdoor behavior were discovered.

Despite its challenges, the triggerless backdoor, perhaps the first of its kind, breaks new ground in research on adversarial machine learning. Like any other technology making its way into the mainstream, machine learning will present its own unique security challenges, and we still have a lot to learn.

“We plan to continue working to investigate the privacy and security risks of machine learning and develop more robust machine learning models,” said Salem.

This article was originally published by Ben Dickson on TechTalks, a publication that examines technology trends, how they affect the way we live and do business, and what problems they solve. But we also discuss the evil side of technology, the darker effects of the new technology, and what to look out for. You can read the original article here.

Published on December 21, 2020 – 01:00 UTC


SolarWinds triggers a cyber storm

Subscribe to this bi-weekly newsletter here!

Welcome to the latest edition of Pardon The Intrusion, TNW’s bi-weekly newsletter in which we explore the wild world of security.

Earlier this week, several major U.S. government agencies (including the Departments of Homeland Security, Commerce, Treasury, and State) discovered that their digital systems had been breached by hackers in what is quickly emerging as a sophisticated supply chain attack.

Such attacks often work by first compromising a third party vendor with a connection to the actual target.

Infiltrating a third-party vendor that has access to its customers' networks also greatly increases the scale of an attack: a single successful break-in grants access to all of the companies that depend on the vendor, leaving them all vulnerable at the same time.

In this case, the attackers compromised SolarWinds, a Texas-based IT infrastructure provider, to inject malicious code into its monitoring tool, which was then distributed as software updates to nearly 18,000 of its customers.

SolarWinds counts several US federal agencies and Fortune 500 companies among its customers.

According to cybersecurity firm FireEye, which is itself a victim of the same attack, this was a meticulously planned espionage campaign that may have been running since at least March 2020.

Although there is no concrete evidence tying the attacks to a particular threat actor, several media reports have attributed the intrusion to APT29 (also known as Cozy Bear), a hacking group affiliated with Russia's foreign intelligence service.

It may take months to fully understand the breadth and depth of the hack, but the SolarWinds incident shows again the grave consequences of a supply chain compromise.

Of course, supply chain attacks have happened before. What's more alarming is how little has been done since to prevent them from happening again.

What’s trending in security?

Signal added support for encrypted group calls, the Zodiac Killer cipher was cracked after 51 long years, and a former Cisco engineer was sentenced to 24 months in prison for deleting 16,000 Webex accounts without permission.

  • The Zodiac Killer cipher was cracked after 51 years. “It was an exciting project to work on and it was on a lot of people’s lists of ‘best unsolved ciphers ever,’” said Dave Oranchak, one of the three men who cracked the encrypted message. [Ars Technica]
  • Hackers are getting creative with web skimmers, which are designed to steal payment information from users when they visit a compromised shopping website. Researchers found criminal gangs experimenting with hiding the malicious code in CSS style sheets and social media buttons. [ZDNet]
  • GitHub found that vulnerabilities in open source projects often go undetected for more than four years before they are disclosed. In addition, 17% of all software vulnerabilities are intentionally planted for malicious purposes. Open source is not automatically safe. [GitHub]
  • Apple and Cloudflare have teamed up for a new initiative called Oblivious DNS-over-HTTPS (ODoH), which hides the websites you visit from your ISP. [Ars Technica / Gizmodo]
  • Former Cisco engineer Sudhish Kasaba Ramesh, 31, was sentenced to 24 months in prison for deleting 16,000 Webex accounts without authorization. The stunt cost the company more than $2.4 million: $1.4 million in employee time and $1 million in customer refunds. [ZDNet]
  • Secure messaging app Signal added support for encrypted group video calls with up to five participants. [Signal]
  • A German court has forced the encrypted email provider Tutanota to set up a back door that can be used to monitor a person’s inbox in connection with a blackmail case. [CyberScoop]
  • A few weeks ago, we learned that the company behind the X-Mode SDK had been selling customer location data to government contractors. Now Forbes’ Thomas Brewster has reported how surveillance providers like Rayzone and Bsightful are pulling location data from smartphones using tools that serve mobile ads in third-party apps. [Forbes]
  • Members of an Arabic-speaking hacking group known as MoleRATs used mainstream technology services like Facebook and Dropbox to hide their malicious activities and exfiltrate data from targets across the Middle East. [Cybereason]
  • Critical flaws discovered in dozens of GE Healthcare radiology devices could allow an attacker to access sensitive personal health information, modify data, and even compromise the availability of the devices. Worse, the devices are secured with hard-coded default passwords that can be used to access confidential patient scans. [CyberMDX]
  • Apple, Google, Microsoft, and Mozilla banned a digital certificate used by the Kazakh government to intercept and decrypt HTTPS traffic, after the country urged citizens in its capital, Nur-Sultan, to install the certificate on their devices to access foreign internet services as part of a cybersecurity exercise. [ZDNet]
  • The last 14 days of data breaches, leaks and ransomware: the European Medicines Agency, Foxconn, Intel’s Habana Labs, Kmart, helicopter, Netgain, Edge position, Spotify, Vancouver’s TransLink, UiPath, 45 million X-ray and other medical scan images, and personal information from 243 million Brazilian citizens.

Data point

According to the latest statistics from the National Vulnerability Database, 2020 saw a record number of reported vulnerabilities: 17,537 recorded during the year, up from 17,306 in 2019.

Over the past 12 months, 4,177 high-severity, 10,767 medium-severity, and 2,593 low-severity vulnerabilities were reported. The 17,306 bugs published in 2019 broke down into 4,337 high-severity, 10,956 medium-severity, and 2,013 low-severity vulnerabilities.
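The severity counts above can be checked against the reported annual totals with a few lines of Python:

```python
# Sanity check of the National Vulnerability Database figures quoted above.
y2020 = {"high": 4_177, "medium": 10_767, "low": 2_593}
y2019 = {"high": 4_337, "medium": 10_956, "low": 2_013}

total_2020 = sum(y2020.values())
total_2019 = sum(y2019.values())

# The severity counts do add up to the reported annual totals,
# and 2020 edges out 2019 by a couple of hundred bugs.
print(total_2020, total_2019, total_2020 - total_2019)
```

Both totals match the figures in the text, with 2020 ahead of 2019 by 231 reported vulnerabilities.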

That’s it. I’ll see you all in two weeks. Stay safe!

Delighted x TNW (enthusiastic[at]thenextweb[dot]com)


Musicians earn less than 1 cent per stream – that has to change

If you are a music lover, you have likely used streaming services. Streaming now makes up more than half of the UK music industry’s global sales, grossing over £1 billion last year.

While the three big labels – Sony, Universal and Warner – are posting record profits, a survey by The Ivors Academy and the Musicians’ Union found that eight in ten music professionals make less than £200 a year from streaming. According to one report, artists earn an average of just £0.009 per stream.

The UK government is currently conducting an investigation into music streaming to see how it can be made fairer and whether musicians and songwriters can get a better cut. Artists who have given testimony include Ed O’Brien of Radiohead, Guy Garvey of Elbow and disco legend Nile Rodgers, while Led Zeppelin’s Jimmy Page also submitted a letter of support.

Singer-songwriter Nadine Shah also gave testimony during the investigation, saying that artists and songwriters are struggling to pay their rent. And the investigation heard that Fiona Bevan, who has written songs for One Direction and Lewis Capaldi, received just £100 in royalties for co-writing a track on Kylie Minogue’s number one album Disco.

But there might be a way to make streaming pay for musicians – by treating it more like radio, where they already make money when their songs are played.

How did we get here?

The music industry has always made more money for record labels than for artists. And now that streaming is the main way many of us consume music, there is even less money left for musicians.

Streaming services like Apple Music and Spotify make money from subscription fees and advertising, and strike deals with record labels for access to songs. Platforms keep around 30% of streaming revenue, 15% goes to a music publisher representing the songwriters, and the record label receives 55%. The label then pays the artists a percentage of its share – after they have repaid the label’s investment in them.
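To make the split concrete, here is a back-of-the-envelope sketch. The 30/15/55 shares and the £0.009-per-stream artist average come from the article; the rest is plain arithmetic on those figures:

```python
# Revenue split described in the article: platform / publisher / label.
shares = {"platform": 0.30, "publisher": 0.15, "label": 0.55}

# How £1.00 of streaming revenue is divided before any artist is paid:
pound_split = {who: round(share * 1.00, 2) for who, share in shares.items()}

# At the reported average of £0.009 per stream, even the £200-a-year
# threshold that eight in ten professionals miss takes serious volume:
streams_for_200 = 200 / 0.009

print(pound_split)
print(round(streams_for_200))
```

At £0.009 a stream, an artist needs roughly 22,000 streams just to clear £200, before any label recoupment is taken into account.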

But while artists make money when their songs are played on the radio, streaming doesn’t work the same way. Radio is considered a “passive” broadcast, meaning you don’t select the music, whereas streaming is treated as interactive broadcasting online: people choose songs and listen at will.

Thanks for the music, but who gets the money? TYLIM / Shutterstock

However, a huge part of streaming platforms is playlists, which people listen to much as they do radio. The problem is compounded by the fact that some people actually make money creating playlists – yet neither users nor artists are told what deals have been made to place music on them.

As part of the evidence I gave to the government investigation, I recommended treating playlisters as influencers. As such, they should be regulated by the UK’s Advertising Standards Authority – much like sponsored social media posts.

A possible solution

As I explain in my book “Copyright in the Music Industry”, copyright law is supposed to ensure that creators are paid for their work so that they can continue to create and spread this creativity – which benefits society as a whole.

While the music industry and streaming services are handsomely rewarded for distributing music, copyright is failing artists and songwriters. Fortunately, copyright isn’t set in stone: where it doesn’t work, the law can be changed. It is regularly updated to adapt to new technology, and now it has to adapt to streaming music.

Nadine Shah makes so little money from streaming that she struggles to pay her rent. CJS Media / Shutterstock

One solution that could help struggling musicians would be “fair pay” for streaming. Here a collecting society takes a licence fee from the label and passes it on to the artist whose music is streamed – just as already happens when a song is played on the radio. This puts money straight into the artist’s pocket. Something similar already exists in other countries, such as Spain and the Netherlands.

This would allow artists to be paid fairly, which is vital: without a fair income, many artists will not be able to sustain a career in music. Change is necessary not only for artists to survive, but for music itself.

This article by Hayleigh Bosher, Lecturer in Intellectual Property Law at Brunel University London, is republished by The Conversation under a Creative Commons license. Read the original article.


Sony AI launches its Gastronomy Flagship Project to make the world a little tastier

Sony AI recently announced its Gastronomy Flagship Project, an AI-powered initiative aimed at “enhancing the creativity and techniques of chefs around the world”. This may sound like a small potato problem compared to self-driving cars, virtual assistants, or other popular AI programs. But everyone eats.

The art of cooking has been around for as long as there have been people with an appetite, and Sony AI would be hard-pressed to find a field with more universal reach than food. From the Sony AI website:

With the aim of enhancing the creativity and techniques of chefs around the world, the Gastronomy Flagship Project consists of the research and development of an AI application for creating new recipes, a robotics solution that can assist chefs in their cooking process, and a community co-creation initiative that will serve as the basis for these activities.

Sony AI describes the initiative as a way to give chefs tools and resources to create new culinary creations and connect with their fans. And yes, fans is definitely the right word. Twenty seconds on Instagram is enough to show you just how big gourmet culture has become. More people than ever are devoted to preparing, presenting and discussing new and interesting techniques for making and serving food and drink.

Like gaming and imaging, catering is a market with no upper limit in sight.

Related: AI will soon decide what we eat

Sony AI’s contributions in this area will take a three-pronged approach. The first prong is AI for recipe creation. For years, developers have trained neural networks to generate recipes, often with strange results. But Sony AI’s aim isn’t just to dump a pile of recipes into a database and see what a GAN spits out.

According to a press release:

Sony AI will use a variety of data sources – including recipe and ingredient data such as taste, aroma, flavor, molecular structure, nutrients, etc. – to develop a recipe creation app powered by proprietary AI algorithms to support the world’s top chefs in their creative process of ingredient pairing, recipe design and menu creation.

The company’s next step is to develop a cooking assistant in the form of a robot – a smart, automated solution to the age-old problem of kitchen help. What could be more exciting is this little hint from the website:

Sony AI aims to develop a solution that will assist chefs throughout the cooking process, from preparation to plating, by training robots with sensors and AI to acquire those skills.

In addition, remote operations of these robots, for example to serve the chef’s meals to people in remote locations, are also the subject of this research and development effort.

That sounds a lot like Sony AI is working toward a system that could, for example, program a robot in Amsterdam to produce a 1:1 replica of a meal created by a chef in Hollywood. If it means I can finally get authentic Roscoe’s Chicken and Waffles without having to go back to LA, I’ll be the first to call it a eureka moment for science and AI.

Eventually, the company hopes to use its machine learning tools to facilitate community collaboration and interaction in the cooking world. In one such trial, the company plans to publish interviews with various chefs on topics such as food sustainability and nutritional health during the pandemic as a community-building series.

Quick take: Sony AI’s commitment to the culinary world is impressive. This is a real problem space that touches everyone in every single country on Earth. We may not all gab over gourmet avocado toast and mimosas, but everyone eats.

Sony’s AI efforts could lead to new nutritional discoveries, better growing and cultivation techniques, and even clever solutions to the bigger problem of world hunger.

We can’t wait to see what the company cooks up in 2021. For more information, please visit the Sony AI website here.

Published on December 18, 2020 – 19:37 UTC


What will the car of tomorrow look like? That depends on the future of our cities

This article was written by Alexander Gmelin on The Urban Mobility Daily, the content site of Urban Mobility Company, a Paris-based company that drives the mobility business through physical and virtual events and services. Join their community of 10,000+ global mobility professionals by signing up for the Urban Mobility Weekly newsletter. Read the original article here and follow them on LinkedIn and Twitter.

Shared mobility is one of the great hopes for sustainable transport. Privately owned vehicles are notoriously wasteful and mostly sit idle. Advances in digital technology and the rise of the smartphone have opened up the potential to vastly improve asset use, bring profit to savvy businesses, and provide cheaper, more convenient mobility for consumers. Alexander Gmelin, CPO of Invers, explains how the German company is laying the foundation for a joint mobility platform.

Automated car sharing

Company lore has it that founder Uwe Latsch, now CTO, started the company after trips to visit his girlfriend’s parents in the countryside. He got around town by bike and couldn’t see the logic in buying a car just for the occasional trip out of the city. As one of the pioneers of car sharing, Uwe devoted himself to developing the technology to automate the shared use of vehicles.

The endless pursuit of improving reliability and scalability

In the early days of car sharing, the biggest challenge was organizing the handover of the vehicle. We developed an automated solution to avoid manual key handover: an RFID reader behind the windshield and a telematics unit installed inside and connected to the vehicle electronics. With the advent of smartphones and the advancement of cellular networks, vehicle sharing became much more convenient and accessible. These technologies also enabled a multitude of new business models and accelerated the industry’s momentum. Fleets got bigger and bigger, more modes and vehicle types were added – from cars to bikes, mopeds and scooters – and complexity increased. At the same time, not only has demand grown, but so have user expectations of the service: today, an extremely reliable on-demand service is a must, just as customers are used to from offerings like Netflix or Amazon. To meet this growing demand, operators need a highly reliable technical infrastructure that manages the complexity of a mixed, distributed fleet while maintaining flexibility.

Simplicity in the midst of complexity

Our vision is to make the use of shared vehicles more attractive and affordable than owning vehicles worldwide. Technology plays an important role for operators to differentiate and be successful. As the shared mobility market is on the rise, it is also becoming more complex as fleets of distributed vehicles of different types and manufacturers grow. At Invers we love technical challenges and our goal is to make the complexity manageable while maintaining flexibility so that our customers and partners can be creative. This is especially important for the developers. We have always focused on their needs and provided them with a highly reliable technical infrastructure and perfectly integrated, modular building blocks on which they can build. These consist of hardware, firmware and software – competencies that we all have in-house and that are well coordinated in order to exploit their full potential. Our goal is to ensure that developers do not have to worry about the complexity of the various vehicles, telematics, connectivity and interfaces, but can concentrate fully on what sets their business and service apart. As a result, we recently increased the focus of our product developments even further. This includes:

  • Vehicle-independent telematics: the CloudBoxx offers quick and easy installation, guided by our setup and test app
  • A central access point for all vehicles, including mixed fleets: OneAPI enables seamless integration of all telematics, including OEM vehicles connected ex works
  • Powerful and scalable fleet management software: FleetControl serves as the interface for managing the entire fleet
  • Building blocks for booking software: operators can either develop their own or choose white-label booking software from our 15+ integration partners
  • Advice from our technology and market experts

Customers are free to focus on user experience and functionality

A shared mobility service is primarily a digital business. For the operator, the most important differentiation potential lies in the user experience and the operational processes, where the user app and the fleet-operation tools play an essential role. This is where most operators build the unique services that make the difference, based on their market knowledge and understanding of customer needs – and this is where their creativity must lie. With our solutions, we take care of the reliable technical infrastructure, so that the developers of our customers and partners can fully concentrate on building those excellent, competitive services.

Co-opetition to build a sustainable urban mobility industry

For me, competition in the mobility industry is similar to sport: it thrives on rivalry as well as on cooperation – hence the term “co-opetition”. I’ve been in the car sharing sector for a while now, having started my career at Car2Go, now Share Now. I’ve always felt that, business rivalry aside, there is a great spirit in this industry. So many of us are committed to the same values and goals: getting people out of their private cars and into more sustainable and rewarding modes of transport like shared vehicles.

As an engineering-driven company, we want to push boundaries together with our customers and partners and advance the development of technologies and solutions. That is what gets us out of bed every morning.
This is our common goal: to improve urban mobility and ultimately life in our cities while using resources efficiently.

The future of shared and multimodal mobility

Here in Cologne, home to one of our offices, the city has recently begun converting more and more car lanes into bike paths – I love it when I cycle to work. The same trend can be seen across Europe, and even in some American cities the tide is turning from car ownership to sharing. We have passed “peak car.”
The growth of shared micromobility has been interrupted by COVID-19, but I have no doubt it will come back stronger, with an even wider variety of vehicle types and modes as well as multimodal MaaS solutions. I am convinced that we will soon see exciting new services in this area again. With our technology and our products, we feel well prepared to keep growing with our customers. Despite all the challenges of 2020, we at Invers are happy to be part of this journey, and we are excited to see what great ideas are just around the corner in this fast-moving industry.

SHIFT is brought to you by Polestar. It’s time to accelerate the transition to sustainable mobility. That’s why Polestar combines electric driving with state-of-the-art design and exciting performance. Find out how.

Published on December 17, 2020 – 16:00 UTC


Here’s how neuroscience can protect AI from cyberattacks

Deep learning has come a long way since the days it could only recognize hand-written characters on checks and envelopes. Today, deep neural networks have become a key component of many computer vision applications, from photo and video editors to medical software and self-driving cars.

Roughly fashioned after the structure of the brain, neural networks have come closer to seeing the world as we humans do. But they still have a long way to go, and they make mistakes in situations where humans never would.

These situations, generally known as adversarial examples, change the behavior of an AI model in befuddling ways. Adversarial machine learning is one of the greatest challenges facing current artificial intelligence systems: it can cause machine learning models to fail in unpredictable ways or become vulnerable to cyberattacks.

Adversarial example: Adding an imperceptible layer of noise to this panda picture causes a convolutional neural network to mistake it for a gibbon.

Creating AI systems that are resilient against adversarial attacks has become an active area of research and a hot topic of discussion at AI conferences. In computer vision, one interesting method to protect deep learning systems against adversarial attacks is to apply findings in neuroscience to close the gap between neural networks and the mammalian vision system.

Using this approach, researchers at MIT and MIT-IBM Watson AI Lab have found that directly mapping the features of the mammalian visual cortex onto deep neural networks creates AI systems that are more predictable in their behavior and more robust to adversarial perturbations. In a paper published on the bioRxiv preprint server, the researchers introduce VOneNet, an architecture that combines current deep learning techniques with neuroscience-inspired neural networks.

The work, done with help from scientists at the Ludwig Maximilian University of Munich and the University of Augsburg, was accepted at NeurIPS 2020, one of the most prominent annual AI conferences, which was held virtually this year.

Convolutional neural networks

The main architecture used in computer vision today is the convolutional neural network (CNN). When stacked on top of each other, multiple convolutional layers can be trained to learn and extract hierarchical features from images. Lower layers find general patterns such as corners and edges, while higher layers gradually become adept at finding more specific things such as objects and people.
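To see what a single convolutional filter does at the lowest layer, here is a minimal, dependency-free sketch: a hand-crafted 3×3 vertical-edge kernel slid over a tiny synthetic image. In a real CNN these kernel weights are learned during training rather than set by hand:

```python
# A minimal 2D convolution in pure Python, illustrating how the lowest
# convolutional layers respond to simple patterns such as edges.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(kernel[a][b] * image[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# A 6x6 "image" with a vertical edge: dark left half, bright right half.
image = [[0, 0, 0, 1, 1, 1] for _ in range(6)]

# A hand-crafted vertical-edge kernel; in a CNN these weights are learned.
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]

response = conv2d(image, kernel)
print(response[0])  # activation is strongest where the edge sits
```

The filter's output is zero over the flat regions and peaks exactly at the boundary between dark and bright pixels, which is what "lower layers find edges" means in practice.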

Each layer of the neural network extracts specific features from the input image.

In comparison to the traditional fully connected networks, ConvNets have proven to be both more robust and computationally efficient. There remain, however, fundamental differences between the way CNNs and the human visual system process information.

“Deep neural networks (and convolutional neural networks in particular) have emerged as surprising good models of the visual cortex—surprisingly, they tend to fit experimental data collected from the brain even better than computational models that were tailor-made for explaining the neuroscience data,” David Cox, IBM Director of MIT-IBM Watson AI Lab, told TechTalks. “But not every deep neural network matches the brain data equally well, and there are some persistent gaps where the brain and the DNNs differ.”

The most prominent of these gaps are adversarial examples, in which subtle perturbations such as a small patch or a layer of imperceptible noise can cause neural networks to misclassify their inputs. These changes go mostly unnoticed to the human eye.
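The mechanics behind many such perturbations can be shown with a toy example. The sketch below applies the logic of the fast gradient sign method to a hypothetical linear scorer; real attacks target deep networks via the gradient of the loss, and all numbers here are made up for demonstration:

```python
# Toy illustration of the fast-gradient-sign idea on a linear scorer.
# Everything here is synthetic; real attacks target deep networks.
import random

random.seed(0)
d = 784  # think of a 28x28 image flattened into 784 pixels
w = [random.uniform(-1.0, 1.0) for _ in range(d)]  # fixed "classifier" weights
x = [random.uniform(0.0, 1.0) for _ in range(d)]   # the input "image"

def score(v):
    """Linear classification score w . v."""
    return sum(wi * vi for wi, vi in zip(w, v))

# For a linear score, the gradient with respect to the input is just w,
# so stepping each pixel by -eps * sign(w) pushes the score down as hard
# as possible while changing no pixel by more than eps.
eps = 0.05
x_adv = [vi - eps * (1.0 if wi > 0 else -1.0) for vi, wi in zip(x, w)]

# Per-pixel change is imperceptible, but it accumulates across all 784
# dimensions into a large shift in the classifier's score.
print(score(x), score(x_adv))
```

No single pixel moves by more than 5% of its range, yet the score drops by eps times the sum of the weight magnitudes (roughly 20 here), easily enough to flip a borderline classification.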

AI researchers discovered that by adding small black and white stickers to stop signs, they could make them invisible to computer vision algorithms (Source: arxiv.org)

“It is certainly the case that the images that fool DNNs would never fool our own visual systems,” Cox says. “It’s also the case that DNNs are surprisingly brittle against natural degradations (e.g., adding noise) to images, so robustness in general seems to be an open problem for DNNs. With this in mind, we felt this was a good place to look for differences between brains and DNNs that might be helpful.”

Cox has been exploring the intersection of neuroscience and artificial intelligence since the early 2000s, when he was a student of James DiCarlo, neuroscience professor at MIT. The two have continued to work together since.

“The brain is an incredibly powerful and effective information processing machine, and it’s tantalizing to ask if we can learn new tricks from it that can be used for practical purposes. At the same time, we can use what we know about artificial systems to provide guiding theories and hypotheses that can suggest experiments to help us understand the brain,” Cox says.

Brain-like neural networks

For the new research, Cox and DiCarlo joined Joel Dapello and Tiago Marques, the lead authors of the paper, to see if neural networks became more robust to adversarial attacks when their activations were similar to brain activity. The AI researchers tested several popular CNN architectures trained on the ImageNet data set, including AlexNet, VGG, and different variations of ResNet. They also included some deep learning models that had undergone “adversarial training,” a process in which a neural network is trained on adversarial examples to avoid misclassifying them.

The scientists evaluated the AI models using the “BrainScore” metric, which compares activations in deep neural networks with neural responses in the brain. They then measured the robustness of each model by testing it against white-box adversarial attacks, where the attacker has full knowledge of the structure and parameters of the target neural network.

“To our surprise, the more brain-like a model was, the more robust the system was against adversarial attacks,” Cox says. “Inspired by this, we asked if it was possible to improve robustness (including adversarial robustness) by adding a more faithful simulation of the early visual cortex—based on neuroscience experiments—to the input stage of the network.”

Research shows that neural networks with higher BrainScores are more robust to white-box adversarial attacks.

VOneNet and VOneBlock

To further validate their findings, the researchers developed VOneNet, a hybrid deep learning architecture that combines standard CNNs with a layer of neuroscience-inspired neural networks.

The VOneNet replaces the first few layers of the CNN with the VOneBlock, a neural network architecture fashioned after the primary visual cortex of primates, also known as the V1 area. This means that image data is first processed by the VOneBlock before being passed on to the rest of the network.

The VOneBlock is itself composed of a Gabor filter bank (GFB), simple and complex cell nonlinearities, and neuronal stochasticity. The GFB is similar to the convolutional layers found in other neural networks. But while classic neural networks start with random parameter values and tune them during training, the values of the GFB parameters are determined and fixed based on what we know about activations in the primary visual cortex.
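For illustration, a Gabor filter bank can be built in a few lines of pure Python. The parameter values below are illustrative placeholders, not the neurophysiology-derived values the VOneBlock actually uses:

```python
# A minimal Gabor filter bank. In the VOneBlock these parameters come
# from neurophysiology data and stay fixed during training; the exact
# values here are made up for demonstration.
import math

def gabor_kernel(size, theta, sigma=2.0, wavelength=4.0, gamma=0.5):
    """Return a size x size Gabor kernel at orientation theta (radians)."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            # Gaussian envelope modulated by a cosine carrier.
            envelope = math.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
            carrier = math.cos(2 * math.pi * xr / wavelength)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

# A tiny "bank": the same filter at four orientations, the way V1 simple
# cells tile orientation space.
bank = [gabor_kernel(7, k * math.pi / 4) for k in range(4)]
print(bank[0][3][3])  # center value: envelope(0) * cos(0) = 1.0
```

Unlike a learned convolution, nothing here is trained: each kernel is fully determined by its orientation, scale, and wavelength parameters, mirroring how the GFB weights are fixed by biology.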

The VOneBlock is a neural network architecture that mimics the functions of the primary visual cortex.

“The weights of the GFB and other architectural choices of the VOneBlock are engineered according to biology. This means that all the choices we made for the VOneBlock were constrained by neurophysiology. In other words, we designed the VOneBlock to mimic as much as possible the primate primary visual cortex (area V1). We considered available data collected over the last four decades from several studies to determine the VOneBlock parameters,” says Tiago Marques, PhD, PhRMA Foundation Postdoctoral Fellow at MIT and co-author of the paper.

While there are significant differences between the visual cortices of different primates, there are also many shared features, especially in the V1 area. “Fortunately, across primates the differences seem to be minor, and in fact, there are plenty of studies showing that monkeys’ object recognition capabilities resemble those of humans. In our model we used published, publicly available data characterizing the responses of monkeys’ V1 neurons. While our model is still only an approximation of primate V1 (it does not include all known data, and even that data is somewhat limited – there is a lot that we still do not know about V1 processing), it is a good approximation,” Marques says.

Beyond the GFB layer, the simple and complex cells in the VOneBlock give the neural network flexibility to detect features under different conditions. “Ultimately, the goal of object recognition is to identify the existence of objects independently of their exact shape, size, location, and other low-level features,” Marques says. “In the VOneBlock it seems that both simple and complex cells serve complementary roles in supporting performance under different image perturbations. Simple cells were particularly important for dealing with common corruptions, while complex cells helped with white-box adversarial attacks.”

VOneNet in action

One of the strengths of the VOneBlock is its compatibility with current CNN architectures. “The VOneBlock was designed to have a plug-and-play functionality,” Marques says. “That means that it directly replaces the input layer of a standard CNN structure. A transition layer that follows the core of the VOneBlock ensures that its output can be made compatible with the rest of the CNN architecture.”
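The plug-and-play structure described above can be sketched as a simple pipeline: a fixed front end, a transition layer, and a trainable backbone. All class names and operations below are hypothetical stand-ins for the architecture's roles, not the paper's code:

```python
class FixedFrontEnd:
    """Stands in for the VOneBlock: fixed, biology-derived filtering
    (here just a placeholder scaling operation)."""
    def __call__(self, image):
        return [p * 0.5 for p in image]

class Transition:
    """Adapts the front end's output to what the backbone expects
    (here a placeholder shift)."""
    def __call__(self, features):
        return [f + 1.0 for f in features]

class Backbone:
    """Stands in for the remaining (trainable) CNN layers."""
    def __call__(self, features):
        return sum(features)

def vone_net(image, front=FixedFrontEnd(), trans=Transition(), back=Backbone()):
    # Swapping in a different front end leaves the rest of the
    # pipeline untouched -- the "plug-and-play" property.
    return back(trans(front(image)))

print(vone_net([2.0, 4.0]))
```

The point of the composition is that only the first stage changes between a standard CNN and a VOneNet; the transition layer absorbs any shape mismatch so the backbone never has to know what produced its input.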

The researchers plugged the VOneBlock into several CNN architectures that perform well on the ImageNet data set. Interestingly, the addition of this simple block resulted in considerable improvement in robustness to white-box adversarial attacks and outperformed training-based defense methods.

“Simulating the image processing of primate primary visual cortex at the front of standard CNN architectures significantly improves their robustness to image perturbations, even bringing them to outperform state-of-the-art defense methods,” the researchers write in their paper.

Experiments show that convolutional neural networks modified to include the VOneBlock are more resilient against white-box adversarial attacks.

“The model of V1 that we added here is actually quite simple—we’re only altering the first stage of the system, while leaving the rest of the network untouched, and the biological fidelity of this V1 model is still quite simple,” Cox says, adding that there is a lot more detail and nuance one could add to such a model to make it better match what is known about the brain.

“Simplicity is strength in some ways, since it isolates a smaller set of principles that might be important, but it would be interesting to explore whether other dimensions of biological fidelity might be important,” he says.

The paper challenges a trend that has become all too common in AI research in the past years. Instead of applying the latest findings of brain mechanisms in their research, many AI scientists focus on driving advances in the field by taking advantage of the availability of vast computing resources and large data sets to train larger and larger neural networks. And as we’ve discussed in these pages before, that approach presents many challenges to AI research.

VOneNet proves that biological intelligence still has a lot of untapped potential and can address some of the fundamental problems AI research is facing. “The models presented here, drawn directly from primate neurobiology, indeed require less training to achieve more human-like behavior. This is one turn of a new virtuous circle, wherein neuroscience and artificial intelligence each feed into and reinforce the understanding and ability of the other,” the authors write.

In the future, the researchers will explore the properties of VOneNet in more depth, along with deeper integration of discoveries from neuroscience into artificial intelligence. “One limitation of our current work is that while we have shown that adding a V1 block leads to improvements, we don’t have a great handle on why it does,” Cox says.

Developing the theory to answer this “why” question will enable AI researchers to home in on what really matters and to build more effective systems. They also plan to explore the integration of neuroscience-inspired architectures beyond the initial layers of artificial neural networks.

Says Cox, “We’ve only just scratched the surface in terms of incorporating these elements of biological realism into DNNs, and there’s a lot more we can still do. We’re excited to see where this journey takes us.”

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech and what we need to look out for. You can read the original article here. 

Published December 17, 2020 — 09:36 UTC


The EU needs 30 million electric vehicles on the road by 2030 to meet its emissions targets

This article was originally published by Christopher Carey on Cities Today, the leading news platform for urban mobility and innovation, reaching an international audience of city leaders. For the latest updates, follow Cities Today on Twitter, Facebook, LinkedIn, Instagram, and YouTube, or sign up for the Cities Today newsletter.

The European Union wants to have at least 30 million electric vehicles on its roads by 2030, according to a draft document due to be published this week.

Other ambitious measures to tackle EU greenhouse gas emissions – 25 percent of which come from the transport sector – are also expected, according to the Reuters report.

“The EU’s target of climate neutrality by 2050 cannot be achieved without introducing very ambitious measures to reduce the dependence of transport on fossil fuels,” the document says. It estimates that the bloc will require three million public charging stations and 1,000 hydrogen filling stations by 2030. This is significantly more than the 200,000 EV charging points in place today; around 1.8 million electric and plug-in hybrid vehicles are currently registered in Europe (EU and non-EU member states).
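A quick back-of-the-envelope comparison of those targets against today's figures (numbers taken directly from the report) shows how similar the required growth rates are:

```python
ev_target, ev_now = 30_000_000, 1_800_000           # vehicles: 2030 target vs. today
chargers_target, chargers_now = 3_000_000, 200_000  # public charging points

fleet_growth = ev_target / ev_now            # how much the EV fleet must grow
charger_growth = chargers_target / chargers_now  # how much the network must grow

print(round(fleet_growth, 1))    # ~16.7x more EVs
print(round(charger_growth, 1))  # 15.0x more chargers
```

In other words, both the fleet and the charging network would have to scale up by roughly a factor of 15 within a single decade.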

The proposals come amid growing tensions among EU members over a plan to tighten the bloc’s 2030 emissions reduction target to at least 55 percent below 1990 levels. Raising this goal from the existing 40 percent is a key element of the European Green Deal and would require more spending on power generation and infrastructure.

Last month, the UK, which is currently in its post-Brexit transition period with the EU, announced that it would bring forward its ban on the sale of new gasoline and diesel vehicles to 2030, following similar moves by Sweden, Germany and the Netherlands.

Expanding infrastructure

To facilitate the move to zero-emission vehicles, energy company Gridserve today opened the first of more than 100 planned electric forecourts across the UK, in Essex, under a £1 billion (US$1.33 billion) five-year plan.

The solar-powered, ultra-fast electric forecourt can charge up to 36 cars simultaneously at up to 350 kW – enough to add 200 miles of range in 20 minutes.
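As a sanity check on that claim, assume a typical EV efficiency of about 3 miles per kWh (an illustrative assumption, not a figure from the article):

```python
miles_per_kwh = 3.0      # assumed average EV efficiency
range_added = 200        # miles, as quoted
charge_time_h = 20 / 60  # 20 minutes in hours

energy_needed_kwh = range_added / miles_per_kwh   # ~66.7 kWh for 200 miles
avg_power_kw = energy_needed_kwh / charge_time_h  # average power required

print(round(avg_power_kw))  # 200
```

So adding 200 miles in 20 minutes only requires about 200 kW on average, comfortably under the 350 kW peak – realistic, given that cars taper their charge rate as the battery fills.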

Toddington Harper, Founder and Managing Director of Gridserve, said, “It is our shared responsibility to keep greenhouse gas emissions from rising, and clean energy electric vehicles are a big part of the solution.

“However, charging has to be easy and anxiety-free. That’s why we’ve designed our electric forecourts entirely around drivers’ needs, updating the traditional gas station model for a zero-carbon world and creating the confidence people need to switch to electric transportation today.”

While Norway leads Europe in per capita ownership of electric vehicles thanks to generous tax incentives that dramatically lower the price of electric vehicles, other European countries have seen mixed results.

In addition to the comparatively high costs for new vehicles, other factors such as “range anxiety” – concerns about not finding charging options – have made drivers reluctant to switch.


Published on December 16, 2020 – 15:00 UTC