May the Force be with you.

Clever collector.

The best example of this is (you guessed it) Amazon, a company that has been collecting data on the online behaviour of its 300 million customers for many years. In 1998, prehistory on the digital time scale, Amazon launched its groundbreaking collaborative filter: the item-to-item algorithm. The key phrase: ‘Customers who bought this item also bought…’

Up until then, user-to-user filters were the most commonly used, and they came with several decisive disadvantages:

1. The system was inefficient when numerous articles were available but were accompanied by only a few product ratings.
2. Running a comparison algorithm across all users would involve enormous computational costs.
3. The whole model would need to be rebuilt whenever user profiles changed – something that happens all the time.

Instead, Amazon compared the products themselves, using the relationships between items bought together to get the data it needed. Clear proof that Big Data analysis begins with the choice of the right algorithm.
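The item-to-item idea can be sketched in a few lines. The following is a minimal illustration – the baskets and item names are invented, not Amazon data, and the real system is far more elaborate. For every pair of items bought together, it computes a cosine similarity over binary purchase vectors, which directly yields the ‘customers who bought this item also bought…’ list:

```python
from collections import defaultdict
from itertools import combinations
import math

def item_similarities(baskets):
    """Build item-to-item cosine similarities from purchase baskets."""
    counts = defaultdict(int)       # how often each item was bought
    pair_counts = defaultdict(int)  # how often two items were bought together
    for basket in baskets:
        items = set(basket)
        for item in items:
            counts[item] += 1
        for a, b in combinations(sorted(items), 2):
            pair_counts[(a, b)] += 1
    sims = defaultdict(dict)
    for (a, b), together in pair_counts.items():
        # Cosine similarity between the two items' binary purchase vectors
        sim = together / math.sqrt(counts[a] * counts[b])
        sims[a][b] = sim
        sims[b][a] = sim
    return sims

def also_bought(sims, item, top_n=3):
    """'Customers who bought this item also bought ...'"""
    ranked = sorted(sims.get(item, {}).items(), key=lambda kv: -kv[1])
    return [other for other, _ in ranked[:top_n]]

baskets = [
    ["camera", "sd_card", "tripod"],
    ["camera", "sd_card"],
    ["camera", "camera_bag"],
    ["sd_card", "card_reader"],
]
sims = item_similarities(baskets)
print(also_bought(sims, "camera"))  # sd_card ranks first
```

Note why this sidesteps the three disadvantages above: the similarity table depends only on which items co-occur in baskets, so it can be precomputed offline per item and does not need rebuilding every time an individual user profile changes.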

From e-tailer to data dealer.

Comparative processes can identify patterns in bulk data. It is precisely these prognoses that help to identify developing trends, enable more precise control of marketing activities, optimise the efficiency of processes, increase customer loyalty and, ultimately, generate bigger profits. Amazon naturally exploits its competencies in the field of Business Intelligence, in particular for better coordination of its logistics. To ensure that something like ‘Same Day Delivery’ really works, billions of articles stored at more than 340 locations must be constantly monitored and tracked. The in-house web service interface delivers updates to the websites and all warehouses every 30 minutes. At the same time, customer service receives all relevant customer data so that it can help customers whenever they need it. Most of us see Amazon as a kind of online department store. In reality, the company is transforming step by step into a dealer in data and cloud applications, and will eventually become a ‘Big Data Analysis Services Provider’.

For example, Amazon sells data clusters to marketers, who use the information for more precisely focused product advertising campaigns. In contrast to Google and Facebook, which quite probably hold far more generalised data about consumers, Amazon has much clearer insights into what people really buy and the products they look for. This information will almost certainly significantly increase Amazon’s revenue from advertising in the coming years.

Big Data on the shelves.

Alongside Amazon, the American retail chain Walmart is one of the leading Big Data analysts. Every hour, Walmart gathers more than 2.5 petabytes of information from more than one million customer transactions. Walmart analyses its customers’ preferences in intense detail and adapts the stock of its individual outlets to match the results, allowing it to reduce out-of-stock situations and increase revenue.

The company is even in a position to identify links between its sales trends and external events – trending topics on Twitter, mass-audience sporting events or even the weather. In 2004, Walmart noted that sales of strawberry Pop-Tarts rose by 700% after a hurricane warning was issued. Since then, you can tell when it’s hurricane season from the shelves stocked with strawberry Pop-Tarts close to Walmart checkouts.
Walmart possesses information about around 145 million Americans. These data originate from its supermarkets and from the Internet. Product tracking, product combinations registered at checkouts, daily turnover, deliveries of goods, branch-specific details like opening hours, mentions in social networks, individual visitor behaviour and click rates on its website are all collected and combined by the company to create a gargantuan database of interconnected multi-level relationships. Walmart is thus in a position to optimise its business processes on the basis of micro-level trends, to track the customer lifecycle between in-store and online shopping, and to reduce the number of returns by sending personalised emails. This data-driven intelligence is neither a pilot project nor a test – it is a key component in securing and maintaining customer loyalty, and is perhaps comparable with the future of ‘Mercedes me’.
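At its core, linking sales trends to external events of this kind is correlation analysis. A minimal sketch – the weekly figures below are invented for illustration, not Walmart data – computes the Pearson correlation between a hurricane-warning indicator and weekly sales of a product:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Toy weekly data (invented): 1 = hurricane warning issued that week
warnings = [0, 0, 1, 0, 1, 1, 0, 0]
sales    = [40, 38, 300, 45, 280, 310, 42, 39]

print(round(pearson(warnings, sales), 2))  # close to 1: strong positive link
```

A correlation close to 1 flags the product as worth stocking up when the next warning is issued – which is exactly the kind of micro-level trend described above, though correlation alone says nothing about causation.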

Target hits the target.

The American supermarket chain Target scored a media bullseye with an algorithm that identifies expectant mothers on the basis of their shopping behaviour. In 2012, an angry father complained that Target had sent his daughter, still in high school at the time, coupons for baby clothes and cots. The chain apologised. Funnily enough, only a few days after the complaint, it emerged that the young woman in question was indeed pregnant.

Results from the analysis of data offer countless opportunities – and risks. The potential for optimisation is still enormous. In the future, Big Data could fulfil customers’ wishes even more precisely – as a fundamental element of artificial intelligence. Anyone who has ever bought size S nappies is almost certainly going to buy the next larger size in the foreseeable future – a predictable point at which a recommendation for the larger size would be ideally placed. Amazon’s recommendation algorithm has not yet grasped the connection between the growth of babies and nappy sizes.
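The missed opportunity described here would not even require machine learning – a simple rule over the size ladder and a typical wear period would do. A minimal sketch, where the size ladder and the three-month assumption are purely illustrative:

```python
from datetime import date

# Illustrative size ladder and typical wear duration (assumptions, not retailer data)
SIZES = ["newborn", "S", "M", "L", "XL"]
TYPICAL_MONTHS_PER_SIZE = 3

def next_size_recommendation(last_size, purchase_date, today):
    """Suggest the next nappy size once the typical wear period has elapsed."""
    if last_size not in SIZES or last_size == SIZES[-1]:
        return None  # unknown size, or already at the largest
    months_elapsed = (today - purchase_date).days / 30
    if months_elapsed >= TYPICAL_MONTHS_PER_SIZE:
        return SIZES[SIZES.index(last_size) + 1]
    return None  # too early to recommend a size up

print(next_size_recommendation("S", date(2024, 1, 10), date(2024, 5, 1)))  # M
```

The point of the sketch is timing: the recommendation fires at a predictable moment in the customer lifecycle, rather than reacting to what was just bought.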

For some, it may be a shock to discover what can be found out with the help of Business Intelligence. For others, it is simply a tool that enables intelligent and agile business processes. There are many other examples of essential insights that can be extracted from large volumes of data. The bottom line: the question we should be asking is just how much data is lying on our doorstep, and when we should begin exploiting its power.