What is Edge Computing?

December 04, 2019 / Bryan Reynolds

Reading Time: 9 minutes

The Internet of Things (IoT) has the potential to gather large amounts of data at the edge of a network. However, organizations must also bring the processing of that data closer to a network’s edge to take full advantage of it. Edge computing is a new approach to computing that helps organizations overcome the limitations of a strictly cloud-based network. Cloud computing will continue to play a vital role in network architecture, but organizations must change the way they use their IT infrastructure if they’re to remain competitive.

Edge computing offers a number of advantages over traditional data processing solutions, especially for companies that deliver content services or want to break into the IoT market. Realizing these benefits requires consideration of the following areas:

  • Security
  • Reliability
  • Scalability
  • Speed
  • Versatility


Edge computing is a paradigm of distributed computing in which computation and data storage are performed closer to the location where they’re needed than in other models, primarily for the purpose of reducing response times and conserving bandwidth.

Karim Arabi offered a general definition of edge computing at the 2014 Institute of Electrical and Electronics Engineers (IEEE) Design Automation Conference (DAC): computing performed outside the cloud. Arabi also defined edge computing more specifically as computing performed for applications that require near-real-time data processing. This view draws a clear distinction between cloud computing and edge computing based on whether the data is needed as quickly as possible after it’s gathered. Alex Reznik, Chairman of the European Telecommunications Standards Institute (ETSI) Multi-access Edge Computing (MEC) industry specification group (ISG), defines edge computing more broadly: any computing performed outside a traditional data center, since any such location is the edge of a network for someone.

The explosive growth of IoT devices has resulted in an equally dramatic increase in the amount of data that data centers must process, which is often limited by the available bandwidth of their network. Data centers are often unable to provide acceptable response times and transfer rates, which are typically critical requirements for cloud applications.

Edge devices in the cloud computing model need data from cloud servers, requiring organizations to develop networks that decentralize the provisioning of data and services to these devices. The goal of edge computing is to move computation out of the data center by leveraging the processing capability of edge devices such as smartphones and network gateways. This process improves throughput by reducing the data caching, storage and management requirements of the data center. However, the distribution of logic to multiple network nodes also creates additional challenges not found in traditional cloud computing.
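As a rough illustration of moving computation out of the data center, the choice between processing on an edge device and forwarding work to the cloud can be sketched as a simple policy. Everything below — task names, thresholds, and the dispatch rule itself — is a hypothetical sketch, not a real scheduler:

```python
# Toy sketch of an edge-vs-cloud dispatch policy (all names and numbers
# are hypothetical). A task is handled on the edge device when its latency
# budget is tighter than the round trip to the cloud and the device has
# capacity; otherwise it is forwarded to the data center.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    max_latency_ms: float   # how quickly a result is needed
    cpu_cost: float         # fraction of the edge device's capacity

def dispatch(task: Task, free_capacity: float, cloud_rtt_ms: float) -> str:
    """Return 'edge' or 'cloud' under a simple latency/capacity rule."""
    if task.max_latency_ms < cloud_rtt_ms and task.cpu_cost <= free_capacity:
        return "edge"        # near-real-time work stays local
    return "cloud"           # everything else goes to the data center

tasks = [
    Task("brake-sensor-alert", max_latency_ms=10, cpu_cost=0.1),
    Task("nightly-report", max_latency_ms=60_000, cpu_cost=0.5),
]
for t in tasks:
    print(t.name, "->", dispatch(t, free_capacity=0.3, cloud_rtt_ms=80))
```

The point of the sketch is only that the decision is made per task at the edge, rather than every request traveling to the core.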

[Figure: Edge Computing Flow from the Edge to the Internet of Things]


Edge computing allows organizations to expand their network services into areas that were previously out of reach. Edge devices like medical sensors and autonomous vehicles will also become increasingly common, with the potential to save lives. For example, patients in remote rural areas can use medical devices to monitor their condition, and autonomous vehicles can save lives by reducing accident rates. Additional applications for edge devices include industrial safety, where they could identify faulty equipment before it actually malfunctions.

Billions of edge devices are already connected to the Internet, and this number will continue to increase rapidly for the foreseeable future. The large number of IoT devices currently in operation is already changing the way organizations approach systems design. For example, the growing demand for faster services and content delivery is driving organizations to improve the capabilities of their existing networks. Organizations need to begin investing in edge computing now to avoid getting left behind by their competitors.

Data processing presents a particular problem for organizations using their own data center or private cloud, as it requires the data to be transmitted to a centralized location before it can be analyzed and stored. This architecture often makes network bandwidth the bottleneck in performance, which organizations are solving with edge systems. Instead of moving data to a single location in the network’s core, organizations can distribute their data to multiple local data centers and other devices closer to its collection source. In addition to the savings in bandwidth, edge computing can also reduce costs and increase operational efficiency.
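One hedged sketch of where the bandwidth savings come from: instead of shipping every raw reading to the core, an edge node can reduce a batch of sensor samples to a small summary. The readings and byte counts below are illustrative, not measurements:

```python
# Toy illustration of aggregating sensor data at the edge instead of
# transmitting every raw reading to a central server. All figures are
# made up for illustration.

def summarize(readings: list[float]) -> dict:
    """Reduce a batch of raw readings to a small summary at the edge."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

readings = [20.1, 20.3, 19.8, 20.0, 21.2, 20.7]   # e.g. temperature samples
raw_bytes = len(readings) * 8                      # one 8-byte float each
summary_bytes = 4 * 8                              # four floats in the summary

print(summarize(readings))
print(f"sent {summary_bytes} bytes instead of {raw_bytes}")
```

With realistic sample rates (thousands of readings per summary window), the same pattern turns a steady stream into an occasional small message.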

Security

The distributed nature of edge computing involves many changes in security from cloud computing. Data must routinely be encrypted before it can be transmitted to another node through the internet, which is a public network. Furthermore, multiple encryption schemes must be used since data will pass through multiple nodes before reaching a private cloud.
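To illustrate the idea of layering encryption as data crosses several hops on the way to a private cloud, here is a deliberately insecure toy. The XOR "cipher" below only stands in for a real per-hop scheme such as TLS, and the hop keys and payload are invented:

```python
# Illustrative-only sketch of layered per-hop encryption. The XOR
# keystream construction below is NOT secure; it stands in for a real
# cipher so the layering pattern is visible.

import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic pseudo-random keystream from a key."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))   # XOR: encrypt == decrypt

payload = b"sensor reading: 20.4 C"
hop_keys = [b"edge-to-gateway", b"gateway-to-cloud"]  # hypothetical hops

# Each hop wraps the payload in its own encryption layer...
wrapped = payload
for key in hop_keys:
    wrapped = xor_cipher(wrapped, key)

# ...and the layers are removed at the destination.
for key in reversed(hop_keys):
    wrapped = xor_cipher(wrapped, key)

assert wrapped == payload
```

In practice each node pair would negotiate its own keys with a standard protocol; the sketch only shows why crossing several public-network hops implies several encryption contexts.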

Edge devices often have only a small amount of computing resources, which can limit the security methods they can use. Edge computing also requires a shift to a decentralized infrastructure, further complicating security requirements. For example, it generally involves shifting the ownership of collected data from the service provider to the end users.

The growing number of edge devices increases a network’s overall attack surface, but it also provides some advantages in security. A traditional cloud computing system is necessarily centralized, making it particularly vulnerable to power outages and denial of service (DoS) attacks. These types of attacks are less likely to take down an entire edge computing network because applications and resources are distributed across many data centers and edge devices. One of the greatest security concerns in edge computing is that any edge device is a potential entry point for an attack, allowing malware to infect the network. This possibility is a legitimate risk, but system administrators can also isolate compromised portions of an edge computing system more easily without shutting the entire network down.

Edge computing also reduces the amount of data at risk at any given time since it doesn’t need to be transmitted to a centralized data center. Attackers can only intercept the data transmitted to the local server, which is much less than the data typically stored on a central server. Some edge computing networks do use data centers at the edge, which should have additional security measures to protect against local threats such as DoS attacks. Edge data centers should also provide clients with tools they can use to monitor their networks for these attacks.

Reliability

The same distributed design that benefits security can also make edge systems more reliable. Achieving reliability in a distributed system like an edge architecture requires the network to manage node failures efficiently. Users should always be able to access the service without interruption, even when a single node goes down. Edge computing systems must also notify users when such a failure occurs, which generally requires each node to maintain a view of the entire network’s topology. This capability allows the system to quickly detect errors and recover from them.
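Failure detection of this kind is commonly built on heartbeats: each node reports in periodically, and a node that goes quiet past a timeout is flagged. A minimal sketch, with illustrative node names and timings:

```python
# Minimal sketch of heartbeat-based failure detection in a distributed
# edge network. Node names, timings, and the threshold are illustrative.

class NodeMonitor:
    """Tracks the last heartbeat from each node and flags stale ones."""

    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.last_seen: dict[str, float] = {}

    def heartbeat(self, node: str, now: float) -> None:
        self.last_seen[node] = now

    def failed_nodes(self, now: float) -> list[str]:
        """Nodes whose last heartbeat is older than the timeout."""
        return [n for n, t in self.last_seen.items()
                if now - t > self.timeout_s]

monitor = NodeMonitor(timeout_s=5.0)
monitor.heartbeat("gateway-1", now=0.0)
monitor.heartbeat("sensor-7", now=0.0)
monitor.heartbeat("gateway-1", now=4.0)   # gateway-1 keeps reporting in

print(monitor.failed_nodes(now=6.0))      # prints ['sensor-7']
```

A real system layers retries and gossip on top of this so that one missed heartbeat doesn’t trigger a false alarm, but the core detect-and-recover loop is the same.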

Additional factors that can affect reliability in an edge system include the technology used to maintain connections between nodes. The accuracy of data produced at a network’s edge may also be less reliable, since edge devices may have less protection from environmental conditions such as temperature and humidity. However, these devices are also located closer to the user, so network problems are less likely to affect them. Edge devices can also perform critical functions locally, allowing them to continue operating effectively even if a local data center does experience an outage.

Processing data closer to its source reduces the volume of traffic sent to the primary network, increasing the overall speed of the traffic that remains. Prioritizing this traffic can lead to lower latency, which becomes more important as the physical distance between a centralized data center and the edge of the network increases. Placing secondary data centers geographically closer to end-users also becomes a critical design consideration in edge systems when a network is pushed to the limits of its performance. Edge networks need to provide a seamless experience for end-users, who increasingly expect to access content and applications on demand.

A single failure will be less likely to completely shut down network services as the number of edge data centers and devices increases. The ability to reroute data through multiple paths will help users maintain continual access to the services they require. An edge system can thus provide users with unparalleled reliability provided designers effectively incorporate edge devices and data centers into the architecture.
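The rerouting idea can be sketched by modeling the network as a graph and checking whether a destination stays reachable when a node fails. The topology below is hypothetical:

```python
# Sketch of why redundant paths keep services reachable: model the
# network as a graph, then check reachability while skipping failed
# nodes. The topology is hypothetical.

from collections import deque

edges = {
    "user":      ["edge-dc-1", "edge-dc-2"],
    "edge-dc-1": ["core"],
    "edge-dc-2": ["core"],
    "core":      [],
}

def reachable(graph: dict, src: str, dst: str, down: set) -> bool:
    """Breadth-first search from src to dst, skipping nodes that are down."""
    queue, seen = deque([src]), {src}
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in graph.get(node, []):
            if nxt not in down and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(reachable(edges, "user", "core", down=set()))          # True
print(reachable(edges, "user", "core", down={"edge-dc-1"}))  # True: rerouted
print(reachable(edges, "user", "core",
                down={"edge-dc-1", "edge-dc-2"}))            # False
```

With only one edge data center, a single failure severs the path; adding a second makes the user’s connection survive any one outage.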

Scalability

Distributed networks like edge systems have scalability considerations that are distinct from cloud systems. The primary issue is that an edge architecture must account for the large differences in edge devices, especially with respect to performance and power constraints. Connection reliability and environmental conditions of edge devices are also highly variable compared to the stability of a data center in the cloud. Furthermore, the security requirements of edge systems can increase latency, hampering their scalability.

Organizations are often unable to effectively anticipate their future IT infrastructure requirements, especially when they expand rapidly. Building an in-house data center to meet these needs incurs significant capital expenditures to upgrade the infrastructure in addition to the operational expenses needed to maintain it. Furthermore, forecasting their future needs locks organizations into an upgrade path that can constrain expansion when the predictions aren’t accurate. Organizations that grow faster than expected may also lack the resources needed to capitalize on their new opportunities.

Cloud and edge computing technologies facilitate scaling by allowing organizations to pay only for the computing resources they actually use. Devices located closer to their end-users are providing these resources with growing frequency, allowing organizations to easily expand the capabilities and reach of their edge networks. Private data centers are therefore less important in collecting and analyzing data, especially when organizations combine edge computing data centers with co-location services. Organizations can now expand their IT infrastructure quickly and cost-effectively in response to their growing needs and evolving markets.

Edge computing thus provides organizations with a less expensive way to scale their operations through the use of edge data centers and IoT devices. Furthermore, adding a new edge device doesn’t significantly increase the bandwidth requirements of the network’s core.

Speed

Rapid data transmission is a critical operational requirement for many organizations that has become a baseline expectation rather than just a competitive advantage. For example, a delay of even a fraction of a second can make the difference between life and death in the healthcare industry. A slowdown of a few milliseconds can have expensive consequences in the financial industry due to its current reliance on high-frequency trading algorithms. Businesses that provide data-driven services to their customers can suffer long-term damage to their brand as a result of a lagging network.

The most important advantage of edge computing with respect to transmission speed is its ability to reduce latency. Edge devices perform their own data processing or send the data they collect to a local data center, which doesn’t require the data to travel nearly as far as it would in a typical cloud architecture. Furthermore, data transmission speeds will always be limited by the speed of light, which is approximately 186,000 miles per second.

Fiber-optic technology currently limits the maximum speed of data transmission to about two-thirds the speed of light, meaning that data requires at least 21 milliseconds (ms) to travel from Los Angeles to New York. A network’s actual transmission speed is likely to be much lower than this when data accumulates faster than it can be transmitted. Analysts expect information systems to generate about 44 zettabytes (ZB) of data in 2020, which virtually guarantees significant slowdowns with current technology.
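The 21 ms figure can be checked with back-of-the-envelope arithmetic, assuming a fiber route of roughly 2,600 miles between the two cities (an assumption for illustration; actual routes are longer than the straight-line distance and vary by carrier):

```python
# Back-of-the-envelope check of the one-way propagation delay quoted
# above. The 2,600-mile route length is an assumption; real fiber
# routes vary.

SPEED_OF_LIGHT_MPS = 186_000          # miles per second, in vacuum
FIBER_FRACTION = 2 / 3                # signals travel ~2/3 c in fiber
ROUTE_MILES = 2_600                   # assumed LA-to-NY fiber route

delay_s = ROUTE_MILES / (SPEED_OF_LIGHT_MPS * FIBER_FRACTION)
print(f"one-way propagation delay: {delay_s * 1000:.1f} ms")   # ~21.0 ms
```

Note this is propagation delay alone; queuing, routing, and last-mile hops add to it, which is why observed latencies are higher still.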

Networks typically experience the greatest latency during the “last mile,” where data is routed through a local area network (LAN) before it reaches the user. LAN connections can add another 10 to 65 ms of latency, depending on their quality. Processing data closer to its source can bypass much of this last-mile path, which can reduce latency from milliseconds to microseconds. This advantage of edge computing can be quite significant, considering the cost of latency for many organizations.

Versatility

An edge computing system’s scalability also gives it great versatility. Businesses can easily reach their target markets without investing in the expansion of their own data centers by forming partnerships with local data center providers. This strategy allows organizations to serve their end-users in a cost-effective manner that minimizes latency. The elimination of an on-premises data center with a heavy footprint also allows organizations to quickly shift their focus to other markets when economic conditions change.

The ability of an edge system to gather large amounts of actionable data with IoT devices also adds to its versatility. These devices are always on and connected to the internet, so they can collect data continually. In contrast, cloud systems require users to log on with a device before it can interact with a server. Another aspect of an edge system’s versatility is that raw data can be processed locally or transmitted back to a central server, which typically has more powerful analytics capabilities that can provide better insights into the data. Organizations can then use this analysis to meet the needs of their market more effectively.

Incorporating new edge devices into a network allows organizations to provide additional services for their users, which could otherwise require them to replace their on-premises infrastructure. Edge devices are often designed to serve a specific purpose, which creates additional possibilities for driving an organization’s growth. For example, edge computing provides organizations with the ability to expand their networks into areas with limited connectivity, which is particularly beneficial for sectors such as agriculture, health care and manufacturing.


Companies can leverage the growing number of IoT devices to shift their data processing from a private cloud to the edge of their network, improving data transmission speeds and the customer experience. Edge systems are also more scalable than centralized data centers, often making them the preferred choice for rapidly expanding companies that need to remain responsive to changes in customer needs. This advantage is particularly strong when an organization is already using a cloud infrastructure with multiple colocated data centers. Edge computing also allows companies to provide more flexible and reliable services for customers who expect to remain connected to services at all times.

The advantages of edge computing over traditional network architectures will continue to become more evident as organizations implement digital technologies such as artificial intelligence (AI), augmented reality (AR) and virtual reality (VR). These technologies are only beginning to show the potential of the IoT devices currently becoming available, especially in markets such as education, entertainment and media. AR/VR technologies are advancing particularly rapidly and may shortly prove to be one of the biggest uses of edge computing.

Posted in DevOps