The new economy based on Technical Data

13 March 2019
In the era of Big Data, a very special category of data has entered the global economy: Technical Data. Thirty years ago this type of data was of no interest to IT professionals. So, what's changed? This is what we intend to discover...

by Pierre-Sylvain Roos
It’s the new mantra, you hear it everywhere: data is everywhere, we are living in the data era... but is it really as new as all that?
 

What is Technical Data?

In the 1990s, we had already transitioned to the information age, and many publications from that time strongly insisted that data was not the same as information!

Back then the world of data was very different from the one we know now, and was divided into two well-partitioned universes:

  • First, there was the semantic information that circulated in structured information systems serving business users’ (functional) needs. This was the visible part, i.e. exposed, meaningful data used to create information.
  • Then there was the data produced by the systems themselves (or the hardware): hidden data, often coded or illegible to the novice, i.e. data that only interested engineers and maintenance and operations technicians.

It is this second type of data that we call Technical Data. Initially, it was used only to operate or repair systems and devices containing software, and it is produced specifically to log, analyse, identify, synchronise, control, schedule, etc. It includes (a short parsing sketch follows this list):

  • operating data (log files, volumetry, consumption, CPU, etc.),
  • configuration data,
  • login and location data,
  • monitoring data (scheduling or control bits, data from measuring points, cameras, sensors, pointers, etc.),
  • browsing data (web),
  • exchange and transfer metadata (files, binary objects, data packets, messages, etc.).
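To make this concrete, here is a minimal sketch of what handling such data looks like in practice: parsing a syslog-like line into structured fields. The line format and field names are illustrative assumptions, not any particular system's convention.

```python
import re
from datetime import datetime

# Parse a (hypothetical) syslog-like line into structured technical data.
LOG_PATTERN = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) (?P<level>[A-Z]+) (?P<message>.*)"
)

def parse_log_line(line: str) -> dict:
    """Turn one raw log line into a dict of technical-data fields."""
    match = LOG_PATTERN.match(line)
    if match is None:
        raise ValueError(f"unrecognised log line: {line!r}")
    fields = match.groupdict()
    fields["ts"] = datetime.strptime(fields["ts"], "%Y-%m-%d %H:%M:%S")
    return fields

print(parse_log_line("2019-03-13 08:15:42 web-01 WARN disk usage at 91%"))
```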

The main characteristic of this data is that it is ephemeral. Once a component had completed its task correctly, the resulting technical data was not meant to be kept, except in the case of a failure or malfunction, for analysis and debugging purposes. In the event of an incident, one of the first instructions for IT professionals was therefore to take precautionary measures to ensure this data was not automatically destroyed.

One of the major concerns of IT operations centres, especially in an era when disk space was at a premium, was to purge all this data as regularly as possible to stop it accumulating. A typical purge job might look like the sketch below.
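A minimal sketch, assuming a hypothetical log directory and a one-week retention window:

```python
import time
from pathlib import Path

# Hypothetical directory and retention period, for illustration only.
LOG_DIR = Path("/var/log/app")
RETENTION_DAYS = 7

def purge_old_logs(log_dir: Path, retention_days: int) -> int:
    """Delete log files whose last modification predates the retention window."""
    cutoff = time.time() - retention_days * 86400
    removed = 0
    for path in log_dir.glob("*.log"):
        if path.stat().st_mtime < cutoff:
            path.unlink()
            removed += 1
    return removed

if __name__ == "__main__":
    print(f"purged {purge_old_logs(LOG_DIR, RETENTION_DAYS)} files")
```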

Since these ‘dark ages’, three major phenomena have completely changed the situation:

  • software is now used in all systems and devices; it is ubiquitous in industry and present even in the humblest of everyday items,
  • data storage capacity continues to grow exponentially while costs have plummeted,
  • processors’ computing power has increased considerably thanks to the development of parallel and decentralised systems that provide virtually unlimited data processing and manipulation capacity, in real time.

The consequences are well known: a deluge of mass-produced data that can be stored and exploited seemingly without limit.

 

Data Business or Data Value?

When you possess or, even better, produce mass data, the first temptation is of course to trade and sell it, provided buyers can be found. Welcome to the Data Business.

With regard to our Technical Data, let’s look at a fairly representative case of a current technological breakthrough: connected and driverless vehicles and, in particular, Tesla - a leader in this field.

Teslas are (probably) the ultimate connected cars of today in terms of the number and types of on-board sensors, e.g. cameras, radar, ultrasonic detection, etc.

In 2013, a test drive conducted by a journalist from the New York Times gave the general public a glimpse of the range of data collected and manipulated by Tesla. It was not so much the test itself that caught the public’s eye - it turned out to be catastrophic - but rather Elon Musk's counter-attack, supported by data, to prove the journalist's bad faith. It turned out that Tesla possessed all the test data, collected in real time: the route, speed, charging rate, battery usage, etc. The journalist had not respected the test constraints: fully recharging the batteries, driving under the speed limit, and travelling 200 miles maximum.

It turns out that each Tesla is a monster producer of technical data:

  • data from the ECU and the passenger compartment,
  • interaction data with users (infotainment, park assist, eco-driving, etc.),
  • vehicle usage data (driving style, distances travelled, routes, consumption, etc.),
  • operating data and data on the wear and tear of parts (brake discs, driving temperature, etc.).

Most often, Tesla knows there is a problem even before the driver does.

At the moment, Tesla is not selling this data to third parties, but there are already candidates who are very interested in using this type of data for their own ends in the future:

  • highway safety services, to conduct accident studies to improve infrastructure and facilities,
  • the police, to better detect and punish driving offences,
  • insurance companies, to develop personalised offers.

Insurance companies have introduced new types of contracts known as “Pay how you drive”, where the level of coverage and tariffs (higher or lower) are based on the driver’s habits and behaviour; the toy sketch below illustrates the principle. Tesla is not alone in this movement; all global manufacturers are getting in on it.
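A toy illustration of the pricing idea, with entirely hypothetical weights and thresholds, not any insurer's actual model:

```python
# Adjust a base premium from telematics counters. All weights are made up.
def monthly_premium(base: float, km_driven: float,
                    harsh_brakes_per_100km: float,
                    speeding_ratio: float) -> float:
    """Scale a base premium by distance driven and driving-behaviour penalties."""
    distance_factor = min(km_driven / 1000.0, 2.0)       # capped mileage load
    behaviour_penalty = (0.05 * harsh_brakes_per_100km   # harsh braking
                         + 0.50 * speeding_ratio)        # share of time speeding
    return round(base * (0.5 + 0.5 * distance_factor) * (1 + behaviour_penalty), 2)

# A driver covering 800 km with moderate braking and 8% of time over the limit:
print(monthly_premium(base=40.0, km_driven=800,
                      harsh_brakes_per_100km=2.0, speeding_ratio=0.08))
```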

For example, Mercedes Benz now offers five packages of data generated by its vehicles, intended for partners interested in developing new services, such as:

  • Pay-as-you-drive insurance: odometer data from connected cars, distance travelled and speed in real time,
  • Fuel status: the level of fuel in the tank and remaining driving range,
  • Vehicle status: status (open/closed) of the doors, windows and sun roof,
  • Vehicle lock status: lock status of the vehicle (locked/unlocked) and its current heading (compass),
  • Electric vehicle: only for electric vehicles, status of the battery and remaining driving range.

Today, a connected car can generate an average of 25 GB of data per hour. According to McKinsey, the data collected by vehicles will be worth 750 billion USD by 2030.

Even if the data is not sold, it can of course be used by the company to improve its value proposition. This is what Michelin has done for HGV drivers:

  • In 2013, Effi-Fuel offered a range of services to help reduce fuel consumption.
  • In 2015, Effi-Tires offered a comprehensive outsourced tyre management service for fleets of trucks i.e. all the tyres are rented on a ‘per kilometre basis’.

Both offers rely on the installation of a box in each lorry to collect consumption data, driving data (accelerations, decelerations, engine speed, braking) and tyre pressures. They also include an eco-driving component.

In 2019, Michelin is going even further with four new services available as mobile applications:

  • MyBestRoute: optimising HGV routes based on the vehicle type and load,
  • MyInspection: help and guidance for technical vehicle inspections,
  • MyTraining: help to improve your driving,
  • MyRoadchallenge: a serious game to reward good driving.

Again, technical data provides key elements for the efficiency of the service, whether generated by specific sensors or by existing devices, e.g. the GPS in drivers’ smartphones.

 

A criss-crossed world

Another example, albeit slightly different from the previous one, is that of Bouygues Telecom.

In this example, technical data is used in the framework of studies and consulting. What’s new is that Bouygues Telecom did not offer this type of service before, and is now using data that previously served other purposes.

In this case, it concerns technical data from mobile antenna masts. In principle, these masts are used to route and carry signals from mobile devices, as well as to manage the network’s Quality of Service (QoS) and to calculate invoices.

However, combining this data with a simple triangulation system makes it possible to work out the location of a signal, and hence users’ movements, routes and behaviour. The sketch below gives the flavour of the idea.
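A minimal sketch, assuming made-up tower coordinates and signal strengths; real network-based positioning is far more sophisticated:

```python
# Estimate a handset's position as a centroid of nearby towers,
# weighted by received signal strength. All values are illustrative.
def estimate_position(towers):
    """towers: list of (x_km, y_km, signal_strength) tuples."""
    total = sum(s for _, _, s in towers)
    x = sum(xt * s for xt, _, s in towers) / total
    y = sum(yt * s for _, yt, s in towers) / total
    return x, y

towers = [(0.0, 0.0, 0.9), (2.0, 0.5, 0.4), (1.0, 2.0, 0.6)]
print(estimate_position(towers))  # approximate (x, y) in km
```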

This data is anonymised, but by linking it to, for example, customer profiles, age groups or areas of residence, it is possible to produce a whole range of analyses. The advantage is that instead of relying on a representative sample, we can use actual mass data, thus obtaining a level of granularity and relevance far greater than that achieved with conventional statistical methods.

Thanks to this new approach, Bouygues Telecom now advises local authorities on how to optimise their transport services and manage congestion on roads, especially ring roads. Accurate analyses of travel data shed light on the causes of problems and the associated behaviours.

Another use case was developed based on the virtual division of a geographical area into 200 m × 200 m tiles (a tiling sketch follows the list below). The service consists of analysing the flows and presence of users in each tile. This mainly concerns retail activities:

  • making recommendations about the location of a new store,
  • optimising a network of commercial agencies/branches (banks, insurance companies, etc.),
  • informing logistics for catalogue distribution,
  • managing display/billboard sites,
  • analysing a store’s performance (shoppers per tile/store turnover ratio).
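Here is that sketch, assuming user positions are already expressed in metres in some projected coordinate system; all values are illustrative:

```python
from collections import Counter

TILE_SIZE_M = 200  # 200 m x 200 m tiles, as in the use case above

def tile_of(x_m: float, y_m: float) -> tuple:
    """Return the (column, row) index of the tile containing a point."""
    return (int(x_m // TILE_SIZE_M), int(y_m // TILE_SIZE_M))

# Illustrative user positions, in metres.
positions = [(120, 80), (150, 90), (410, 260), (405, 255), (130, 85)]
footfall = Counter(tile_of(x, y) for x, y in positions)
print(footfall)  # Counter({(0, 0): 3, (2, 1): 2})
```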

This list of uses is obviously not exhaustive and is sure to get longer over time.

 

Sensors … and more sensors!

Let’s not beat about the bush: sensors are the spearhead of Technical Data. So much so that, as time passes, we are seeing ever more sensors added to the same product.

For a concrete illustration of this trend, let’s turn our attention to aircraft engines at Rolls Royce.

Since the early 2010s, Rolls Royce has deployed a unique "Engine Health Management" solution to monitor and maintain 9,000 aircraft engines all over the world. These engines are no longer sold as a product but rather as guaranteed flight time.

In order to switch from a product-oriented offer to an end-to-end service, all engines were equipped with sensors. Each engine collects between 50 and 200 parameters in real time, i.e. 40 times a second.

These measurements mainly concern the pressure and temperature of fluids and the vibration speed of mechanical parts. Some data packets containing these measurements are sent during the flight (via a satellite link), and all the data is sent to Rolls Royce engineers after landing. The engineers can detect slight margins for performance gains, as well as identify the factors and conditions under which the engines will require maintenance. Ultimately, this makes it possible to predict and plan maintenance actions days or weeks before breakdowns and malfunctions occur. Daily, efforts are made to ensure that each engine runs at its optimal operating speed. The gains are enormous for all parties: airlines make substantial fuel savings, and aircraft downtime is reduced and managed, so flights are no longer disrupted and follow their schedules.
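A minimal sketch of the kind of monitoring such telemetry enables: flagging a reading that drifts well outside its recent baseline. The window, threshold and data are illustrative assumptions, not Rolls Royce's actual method.

```python
from collections import deque
from statistics import mean, stdev

def drift_alerts(readings, window=50, n_sigma=3.0):
    """Yield (index, value) for readings far outside the recent baseline."""
    recent = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(recent) == recent.maxlen:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) > n_sigma * sigma:
                yield i, value
        recent.append(value)

# Simulated turbine temperature stream with one injected anomaly.
temps = [620 + (i % 5) * 0.3 for i in range(200)]
temps[150] = 655.0
print(list(drift_alerts(temps)))  # [(150, 655.0)]
```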

The next steps? Even more sensors and measurements, up to a thousand per engine, including electrical signals and control systems, plus improved in-flight communications to transmit even more data in real time.

Rolls Royce is an impressive example owing to its sheer industrial scale, but even on a smaller scale, we can see that sensors continue to multiply.

In terms of our immediate environment and the management of our daily lives, the city-state of Singapore is a good example of things to come. A smart-city champion, the "smartest city in the world" has already set up massive data collection systems to manage tolls, parking spaces, shared electric vehicles and driverless cars. Consumption of water, gas and electricity is closely monitored, and each bill is compared with the neighbourhood average in order to influence consumers’ behaviour. And that's not all: the city-state has installed a huge number of sensors and cameras throughout its territory to manage the cleaning of public spaces, crowd density, and the exact movement of each vehicle registered in the country. There is also a plan to share this data with private operators to provide value-added services such as elder care and building safety.

As you can see, we are only at the beginning of an unprecedented development of technologies related to data and technical data, and perhaps also at the beginning of new problems ... but let’s leave that for another day!       

 

The revenge of Technical Data ...  

In 2019, it’s official: even the smallest piece of raw data can be of interest and value. These "small" pieces of coded, incomprehensible, rejected and discredited technical data have now taken over the entire economy. The transformation from ugly duckling to majestic swan has taken place... nobody today would dare to make a distinction between data and information. Moreover, there is no longer any real interest in considering technical data as a separate type of data. What is currently attracting our attention is raw data.

Raw data is usually collected automatically, in continuous flows. It constitutes the largest proportion of the Big Data produced in the world, and it has also given rise to some recent and, to say the least, unexpected technological developments.

Like a supply chain, which allows a product or service to be transported from its production site to its consumption site, raw data requires a dedicated infrastructure to be transported from its collection site to the systems where it can be analysed and used.

Network technologies have tended to focus on increasing throughput and bandwidth, but with raw data (from sensors) the stakes are different.

There is no volume problem: each piece of data takes up a few bits at most. But it does require the capacity for autonomous operation over time, as well as seamless and continuous connectivity to transmit the data at the right time and, sometimes, over very long distances.

These concerns led to the development of a new generation of very low bit rate, low power network technologies, particularly Low Power Wide Area Network (LPWAN).
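To see why such low bit rates can be enough, here is a minimal sketch of packing one sensor reading into a handful of bytes. The field layout is a made-up example, not the SIGFOX or LoRaWAN specification.

```python
import struct

def encode_reading(sensor_id: int, temp_c: float, battery_pct: int) -> bytes:
    """Pack one reading into 5 bytes: id (2), temperature x10 (signed, 2), battery (1)."""
    return struct.pack(">HhB", sensor_id, int(temp_c * 10), battery_pct)

payload = encode_reading(sensor_id=1042, temp_c=21.7, battery_pct=87)
print(len(payload), payload.hex())  # 5 bytes on the wire
```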

This field of application has opened up a new area of economic activity where nobody had ventured until now. Pioneering and somewhat visionary players such as SIGFOX or LoRa are now in a good position to reap the lion's share of this new and booming market.

Another great promise for the future is the advent of data-rich markets in all areas of the economy: marketplaces where transactions will be entirely data-driven, and no longer just subject to a single price criterion.

Once again our old friend, technical data, will have a special and important role to play:

  • it will be used to qualify product and service offers,
  • it will also be massively used by search and comparison algorithms. 

 

The aim of data-rich markets is to provide a vision of a product or service that is enhanced and supplemented with usage data, performance data, testimonials, customer comments, visual and photographic documents and so forth, i.e. anything that can help evaluate what is on offer other than by price.

As for the hidden part of the data, i.e. the part not directly visible to the general public, the collected data will make it possible to create very detailed meta-descriptions, with the direct effect of amplifying the power and relevance of search and recommendation engines.

You may not be aware of it, but one of the reasons Jeff Bezos positioned himself in book sales is that, in 1994, all American publishers digitised and published their seasonal catalogues, including all the technical data used for referencing and classifying books, as well as reading notes and summary descriptions. It was this database that enabled the development of the algorithms that made Amazon so successful, e.g. by helping customers find what they were looking for with unparalleled relevance at the time.

What the history of Technical Data shows us is the extent and radical nature of the recent transformations that the world is experiencing with our entry into the data era.

Beyond the possibility of directly monetising data, or shifting from a product-oriented offer to a broader value proposition including services, a whole new market is inexorably opening up. Data culture is on the move.

 

Conclusion  

To put it simply, any conclusion about technical data can be summed up in three words: Throw nothing away!

If you've realised that all your data has value then it's probably time to act.

First of all, are you sure you know what data you have? It is highly probable that your company or organisation produces data that nobody knows exists. So, first, an exhaustive inventory is necessary. To do this, you ideally need a Data Steward, as well as a data governance system. And of course, it is imperative to talk to your engineers and Tech Leaders, as they will be in the best position to show and explain what they have put in place in terms of technical data.

Once the inventory has been taken, you will need to retain, organise and store data that, until now, was destined to be destroyed in the short term, sometimes on the very day it was produced. A cloud solution is ideal if you want to limit your initial investment while benefiting from a scalable and secure infrastructure.

Finally, as the last step before starting, you need to come up with an idea and construct your first use case. Fortunately, we are no longer in the pioneering days of exploiting this data, and the market is full of off-the-shelf solutions, including ready-to-use verticals in many functional areas. Initially, there is no need to take a cutting-edge approach; you can use applications and solutions that have already proved themselves in other companies and organisations. At this stage, it is more than advisable to obtain outside expertise, as it will save you a lot of time and money.

Now that you have the relevant data, you know how to manage and use it, and you have identified a use case - it’s over to you!  

 

  Don’t hesitate to check out our Digital offer.

Let's have a chat about your projects.
