The emergence of IoT devices, self-driving cars, and the like opened the floodgates of user data. IoT devices brought in so much data that even the seemingly boundless computing capabilities of the cloud could not process it instantly and deliver timely results. That is bad news for data-reliant devices such as self-driving cars.

Fortunately, there is a workaround: edge computing.

In this article, we will explain: 

  • What edge computing is;
  • The most prominent examples of edge computing;
  • Benefits and challenges of implementing edge computing applications.

What is Edge Computing?

“Edge computing” is a type of distributed architecture in which data processing occurs close to the source of data, i.e., at the “edge” of the system. This approach reduces the need to bounce data back and forth between the cloud and the device while maintaining consistent performance.

In terms of infrastructure, edge computing is a network of local micro data centers used for storage and processing. At the same time, the central data center oversees operations and gains valuable insights from the local data processing.

The term “edge” originates from network diagrams, where the “edge” is the point at which traffic enters and leaves the system. Since that point sits at the edges of the diagram, the name stuck.

Edge Computing vs Cloud Computing: What’s the difference?

Edge computing is essentially an extension of cloud computing architecture, optimized for decentralized infrastructure.

The main difference between cloud and edge computing is in the mode of infrastructure. 

  • Cloud is centralized.
  • Edge is decentralized.

The purpose of the edge computing framework is to provide an efficient workaround for high-workload data processing and transmission, which are prone to causing significant system bottlenecks.

  • Since applications and data are closer to the source, the turnaround is quicker, and the system performance is better.

The critical requirement for edge data processing is the time-sensitivity of the data. Here’s what that means:

  • When data is required for the proper functioning of the device itself (as with self-driving cars, drones, etc.);
  • When a continuous information stream is required for proper data analysis and related activities (as with virtual assistants and wearable IoT devices).

The time-sensitivity factor has formed two significant approaches to edge computing:

  • Point of origin processing – data processing happens within the IoT device itself (for example, in self-driving cars);
  • Intermediary server processing – data processing is handled by a nearby local server (as with virtual assistants). 

In addition to that, there is non-time-sensitive data, needed for all sorts of analysis and storage, that can be sent straight to the cloud like any other type of data.
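To make the distinction concrete, here is a minimal sketch of how a gateway might route each reading by its time-sensitivity. The class, thresholds, and tier names are illustrative assumptions, not part of any real framework:

```python
from dataclasses import dataclass

# Hypothetical latency budgets (milliseconds), chosen purely for illustration.
ON_DEVICE_BUDGET_MS = 20      # e.g., obstacle detection in a self-driving car
EDGE_SERVER_BUDGET_MS = 200   # e.g., decoding a voice assistant request


@dataclass
class Reading:
    source: str
    payload: bytes
    latency_budget_ms: float  # how quickly a response is needed


def route(reading: Reading) -> str:
    """Pick a processing tier based on how time-sensitive the reading is."""
    if reading.latency_budget_ms <= ON_DEVICE_BUDGET_MS:
        return "point-of-origin"      # process inside the device itself
    if reading.latency_budget_ms <= EDGE_SERVER_BUDGET_MS:
        return "intermediary-server"  # process on a nearby local server
    return "cloud"                    # non-time-sensitive: send straight up


# A lidar frame must be handled on the vehicle; a daily usage report can wait.
print(route(Reading("lidar", b"...", latency_budget_ms=10)))
print(route(Reading("usage_report", b"...", latency_budget_ms=60_000)))
```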

The intermediary server method is also used for remote/branch office configurations, where the target user base is geographically diverse (in other words, all over the place).

  • In this case, the intermediary server replicates cloud services on the spot, keeping the data processing sequence consistent and its performance high.

Why does edge computing matter?

There are several reasons for the growing adoption of edge computing:

  • The increasing use of mobile computing and Internet of Things devices;
  • The decreasing cost of hardware.
  • Internet of Things devices require fast response times and considerable bandwidth for proper operation.
  • Cloud computing is centralized. Transmitting and processing massive quantities of raw data puts a significant load on the network’s bandwidth.
  • In addition to this, constantly moving large quantities of data back and forth is far from cost-effective.
  • On the other hand, processing data on the spot and then sending only the valuable results to the center is a far more efficient solution.

Some edge computing examples

Voice Assistants

Voice assistants are probably the most prominent example of edge computing at the consumer level. Well-known examples include Apple Siri, Google Assistant, Amazon Echo Dot, and the like.

  • These applications combine voice recognition and process automation algorithms. 
  • Both rely on data processing on the spot for the initial steps (i.e., decoding the request) and on a connection to the center for further refinement of the model (i.e., sending the results of the operation).
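As an illustration, here is a hedged sketch of that split; the command list and helper functions are assumptions for this example, not any vendor's actual API:

```python
# Simplified sketch of the local/central split in a voice assistant.
LOCAL_COMMANDS = {"set a timer", "turn on the lights", "stop"}


def decode_locally(transcript: str):
    """Initial processing on the device: match simple, known commands."""
    text = transcript.strip().lower()
    return text if text in LOCAL_COMMANDS else None


def send_to_cloud(transcript: str) -> str:
    # Placeholder for the round trip to the central service.
    return f"resolved in the cloud: {transcript}"


def report_outcome_to_cloud(transcript: str, result: str) -> None:
    # Placeholder for the telemetry upload that refines the central model.
    pass


def handle_request(transcript: str) -> str:
    command = decode_locally(transcript)
    result = f"executed locally: {command}" if command else send_to_cloud(transcript)
    report_outcome_to_cloud(transcript, result)  # results go back either way
    return result


print(handle_request("Turn on the lights"))
print(handle_request("What's the weather in Oslo?"))
```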

Self-driving cars 

At the moment, Tesla is one of the leading players in the autonomous vehicle market. Other automotive industry giants such as Chrysler and BMW are also trying their hand at self-driving cars. In addition, Uber and Lyft are testing autonomous driving systems as a service.

  • Self-driving cars process numerous streams of data: road conditions, vehicle status, driving behavior, and so on. 
  • This data is then worked over by a mesh of different machine learning algorithms, which requires rapid-fire data processing to maintain situational awareness. Edge computing provides the self-driving car with exactly that.
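The sketch below shows the general idea under stated assumptions (the sensor fields, thresholds, and loop are made up): several streams are fused on the vehicle within a tight time budget, and only a compact summary ever leaves the car:

```python
import time


def fuse_frames(camera_frame: dict, lidar_frame: dict, telemetry: dict) -> dict:
    """Combine one tick of sensor data into a local driving decision."""
    obstacle_close = lidar_frame["min_distance_m"] < 5.0
    speeding = telemetry["speed_kmh"] > telemetry["speed_limit_kmh"]
    return {"brake": obstacle_close, "ease_off": speeding}


summaries = []
for tick in range(3):  # stand-in for the real high-frequency sensor loop
    start = time.perf_counter()
    decision = fuse_frames(
        camera_frame={"objects": ["car", "sign"]},
        lidar_frame={"min_distance_m": 4.2 + tick},
        telemetry={"speed_kmh": 62, "speed_limit_kmh": 60},
    )
    elapsed_ms = (time.perf_counter() - start) * 1000
    # The decision is acted on immediately, on the vehicle itself.
    summaries.append({"tick": tick, "decision": decision, "ms": round(elapsed_ms, 3)})

# Only this compact summary (not raw camera or lidar frames) goes to the cloud.
print(summaries)
```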

Healthcare

Healthcare is one of the industries that gets the most out of emerging technologies, and mobile edge computing is no different.

Internet of Things devices are extremely helpful for such healthcare data science tasks as patient monitoring and general health management. In addition to organizer features, these devices can track heart rate and calorie expenditure.

  • Wearable IoT devices such as smartwatches can monitor the user’s state of health and, on occasion, even save lives. The Apple Watch is one of the most prominent examples of a versatile wearable IoT device. 
  • Their operation combines data processing on the spot (for initial handling) with subsequent processing in the cloud (for analytics). 
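For instance, a smartwatch-style workflow might look roughly like the sketch below; the threshold and function names are assumptions for illustration, not medical guidance or a real device API. Readings are checked on the wrist in real time, and only a daily summary is shipped to the cloud for analytics:

```python
from statistics import mean

HIGH_HR_THRESHOLD = 150  # illustrative threshold only


def check_on_device(heart_rate_bpm: int):
    """Initial processing on the wearable: alert immediately if needed."""
    if heart_rate_bpm > HIGH_HR_THRESHOLD:
        return f"alert: heart rate {heart_rate_bpm} bpm is unusually high"
    return None


def daily_summary(readings: list) -> dict:
    """Aggregate locally; only this summary goes to the cloud for analytics."""
    return {"min": min(readings), "max": max(readings), "avg": round(mean(readings), 1)}


readings = [72, 75, 160, 80, 78]
for bpm in readings:
    alert = check_on_device(bpm)
    if alert:
        print(alert)            # handled on the spot, no round trip to the cloud

print(daily_summary(readings))  # compact payload for cloud-side analysis
```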

Retail & eCommerce

Retail and eCommerce apply various edge computing technologies (such as geolocation beacons) to improve and refine the customer experience and gather more ground-level business intelligence.

Edge computing enables streamlined data gathering. 

  • The raw data stream is sorted out on the spot (transactions, shopping patterns, etc.);
  • Recognized patterns, such as toothbrushes and toothpaste being bought together, are then sent to the central cloud to further optimize the system.

As a result, data analysis is more focused, which makes for more efficient service personalization and more thorough analytics on supply, demand, and overall customer satisfaction.
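A minimal sketch of that flow, with made-up transaction data, might look like this: the in-store edge node counts co-purchase patterns locally and forwards only the aggregated counts to the central cloud:

```python
from collections import Counter
from itertools import combinations

# Illustrative in-store baskets; in practice this is the raw point-of-sale stream.
transactions = [
    {"toothbrush", "toothpaste", "soap"},
    {"toothbrush", "toothpaste"},
    {"bread", "milk"},
]

# Sorted out on the spot: count which item pairs are bought together.
pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Only recognized patterns (frequent pairs), not raw receipts, go to the cloud.
frequent_pairs = {pair: n for pair, n in pair_counts.items() if n >= 2}
print(frequent_pairs)  # {('toothbrush', 'toothpaste'): 2}
```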

Here’s how different companies apply edge computing:

  • Amazon operates worldwide, so its system needs to be distributed regionally to balance the workload. Because of that, Amazon uses intermediary servers to speed up processing and keep the service efficient on the spot.
  • Walmart uses edge computing to process payments in its stores. This enables much faster customer turnaround with a lower chance of a bottleneck at the counter. 
  • Target applies edge computing analytics to manage its supply chain. This contributes to its ability: 
  • to react quickly to changes in product demand; 
  • to offer customers different tiers of discounts, depending on the situation.

Benefits and challenges of edge computing

Edge computing benefits

The benefits of edge computing fall into five categories:

  1. Speed – edge computing allows processing data on the spot or at a local data center, thus reducing latency. As a result, data processing is faster than it would be when the data is ping-ponged to the cloud and back.
  2. Security. There is a fair share of concerns regarding the security of IoT (more on that later). However, there is an upside too. Standard cloud architecture is centralized, which makes it vulnerable to DDoS attacks and other threats (check out our article on cloud security threats to learn more). Edge computing, by contrast, spreads storage, processing, and related applications across devices and local data centers, a layout that makes disrupting the whole network much harder.
  3. Scalability – a combination of local data centers and dedicated devices can expand computational resources and enable more consistent performance. At the same time, this expansion doesn’t strain the bandwidth of the central network.
  4. Versatility – edge computing enables the gathering of vast amounts of diverse, valuable data. Raw data is handled at the edge, keeping the device’s service running, while the central network receives data already prepared for further machine learning or data analysis. 
  5. Reliability – with the operation proceedings occurring close to the user, the system is less dependent on the state of the central network. 

Edge computing challenges

Edge computing brings much-needed efficiency to IoT data processing, helping to keep its performance timely and consistent.

However, several challenging issues come along with the good stuff.

Overall, five key challenges come with the implementation of edge computing applications. Let’s take a closer look:

  1. Network bandwidth – the traditional resource allocation scheme provides higher bandwidth for data centers, while endpoints receive the lower end. With the implementation of edge computing, these dynamics shift drastically as edge data processing requires significant bandwidth for proper workflow. The challenge is to maintain the balance between the two while maintaining high performance.
  2. Geolocation – edge computing increases the role of geography in data processing. To maintain a proper workload and deliver consistent results, companies need a presence in local data centers. 
  3. Security. Centralized cloud infrastructure enables unified security protocols. Edge computing, by contrast, requires enforcing these protocols on remote servers, while the security footprint and traffic patterns are harder to analyze.
  4. Data Loss Protection and Backups. Centralized cloud infrastructure allows the integration of a system-wide data loss protection system. The decentralized infrastructure of edge computing requires additional monitoring and management systems to handle data from the edge. 
  5. Data storage and access management. The edge computing framework requires a different approach to data storage and access management. While centralized infrastructure allows unified rules, with edge computing you need to keep an eye on every “edge” point.

In conclusion

The adoption of cloud computing brought data analytics to a new level. The interconnectivity of the cloud enabled a more thorough approach to capturing and analyzing data. 

With edge computing, things have become even more efficient, and the quality of business operations has improved as a result.

Edge computing is a viable solution for data-driven operations that require lightning-fast results and a high degree of flexibility in responding to the current state of things.

