
What Software Architectures Should Include in 2021

October 05, 2021 / Katarina Rudela
Reading Time: 13 minutes

Overview

At its core, software architecture is the unifying framework of a piece of software. It details how the logical and physical components of a system interact. According to Gartner, "software architecture consists of principles, guidelines, and rules that direct development. It includes hardware, communication protocols, development methodologies, modeling, and organizational frameworks."

In other words, software architecture is the blueprint for designing and developing a software solution. Just as builders need architectural plans to construct a building, developers need a blueprint of how the software will address infrastructure requirements such as:

  • Security

  • Visibility

  • Scalability

  • Portability

  • Flexibility

  • Sustainability

A software blueprint should ensure system resilience and performance that meet business objectives.

Software Architecture

The overarching architecture of a software system may be monolithic, distributed, or serverless. This high-level plan is then broken into lower-level blueprints of how each component should behave. Monolithic is the traditional architecture; distributed and serverless are more recent approaches.

Monolith

Monolithic architecture packages the entire system as a single application. Everything the system needs to do is contained in that one application. The lack of discrete modules complicates managing the application's lifecycle. A modular monolithic approach uses modules that, when combined, make up a monolithic deployment. Using separate modules makes managing the application's lifecycle more straightforward.

A simplified monolithic structure would be a self-contained application (Software System 1) that may interface with other components such as a database; however, it does not require added modules to operate. A modular approach, as in Software Systems 3 and 4, incorporates services, user interface, business logic, and data access within the application umbrella. Changes to any component require rebuilding the entire application.


Figure 1. Monolithic Architectures

Distributed

Distributed architecture divides an application into smaller components that can scale separately. The architecture also allows development teams to be organized so that back-end and front-end developers work on their respective services independently. Two distributed approaches are event-driven and microservices.

Microservices

Independent components known as services are self-contained pieces of code that are responsible for delivering specific functionality. The services communicate with each other to deliver a complete system or application. Because microservices operate independently, they can be managed separately, making it possible to deploy multiple service instances or troubleshoot a service without impacting the entire system.

Figure 2 shows how the Application Load Balancer and the Elastic Container Service are used to ensure performance in a cloud-based deployment. In this example, the web front end sends information to data storage through the load balancer and elastic container services. Capacity changes can be made to the load balancer without touching the container service, and vice versa.


Figure 2. Microservices Architecture
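
To make the idea of an independently deployable service concrete, here is a minimal sketch of a single microservice: one small process that owns its own data and exposes one piece of functionality over HTTP. The service name, endpoint, and data are hypothetical, and a production service would sit behind a load balancer such as the one in Figure 2.

```python
# Minimal sketch of a single microservice: a self-contained process that
# owns one piece of functionality and exposes it over HTTP. Other services
# would communicate with it only through this interface. Endpoint and data
# are illustrative.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class InventoryService(BaseHTTPRequestHandler):
    STOCK = {"sku-123": 42, "sku-456": 7}  # this service owns its own data

    def do_GET(self):
        sku = self.path.strip("/")
        body = json.dumps({"sku": sku, "in_stock": self.STOCK.get(sku, 0)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The service can be deployed, scaled, and restarted independently
    # of the rest of the system.
    HTTPServer(("localhost", 8001), InventoryService).serve_forever()
```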

Event-Driven

The event-driven architecture uses state changes to trigger an event. An event producer creates the event in the form of a message. One or more of the event consumers picks up the message. Depending on the event, the consumer may respond, log, or react to the event. This architecture is often used in e-commerce environments, where the event producer has no knowledge of the event consumer. This messaging-based architecture enables e-commerce platforms to handle simultaneous requests.

The event in the following diagram triggers an event processor that outputs information through an event channel, where it triggers another event processor. The sequence repeats until no further events are triggered.


Figure 3. Event-Driven Architecture
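
The producer/consumer relationship described above can be sketched with a simple in-memory event bus. This is an illustration of the pattern only; real event-driven systems typically use a message broker, and the event names and handlers below are hypothetical.

```python
# Minimal sketch of the event-driven pattern: a producer publishes a state
# change as a message, and any number of consumers pick it up. The producer
# has no knowledge of who consumes the event. Event names are illustrative.

from collections import defaultdict
from typing import Callable

class EventBus:
    def __init__(self):
        self._consumers = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]):
        # Register a consumer for a given event type.
        self._consumers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict):
        # Deliver the message to every registered consumer.
        for handler in self._consumers[event_type]:
            handler(payload)

bus = EventBus()

# Consumer 1 reacts to the event; consumer 2 only logs it.
bus.subscribe("order_placed", lambda e: print(f"Charging card for {e['order_id']}"))
bus.subscribe("order_placed", lambda e: print(f"Logging event: {e}"))

# The producer emits a state change as a message.
bus.publish("order_placed", {"order_id": "A-1001", "total": 42.50})
```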

Serverless

Serverless architecture refers to a cloud-provider service model that enables companies to develop and run applications without managing the infrastructure. Developers are not concerned with resource allocation, scaling, or provisioning. The provider operates the servers and dynamically manages the physical infrastructure.
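
As a minimal illustration, a serverless function in the AWS Lambda style for Python reduces the developer's responsibility to a single handler; the servers, scaling, and provisioning are the provider's concern. The payload fields below are hypothetical.

```python
# Minimal sketch of a serverless function in the AWS Lambda style.
# The developer writes only the handler; the cloud provider runs it on
# demand and manages the underlying infrastructure.

import json

def handler(event, context):
    # 'event' carries the request payload; 'context' carries runtime metadata.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```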

Trends

Regardless of the architecture, engineers have to address the business concerns that drive software development. For organizations, strong security, which requires end-to-end visibility, is essential to business survival. Infrastructures must scale up and down as market requirements change. If they cannot scale securely, systems become vulnerable to security compromises.

If security is not built into the architecture, the possibility of a breach or compromise increases. Even if an attack is unsuccessful, companies can experience downtime or poor performance that results in low productivity while the threat is being addressed.

Security

Security can no longer be added at the end of the development cycle. It must be part of the foundational requirements. More projects should use Security by Design frameworks for securing an infrastructure. For example, the Open Web Application Security Project (OWASP) has created 15 principles for security design. Included in these principles are architectural concerns such as:

Defense

Multiple layers of security controls provide a stronger defense. With layers, bad actors may compromise one or two layers, but it is far harder to breach them all. Incorporating logging and auditing capabilities as part of the infrastructure delivers comprehensive data for analyzing activities and responding to attacks.
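
As a simplified sketch of this layering, the example below runs each request through independent authentication, authorization, and input-validation checks and logs every rejection for auditing. The checks, roles, and tokens are hypothetical; this is an illustration of the principle, not a hardened implementation.

```python
# Minimal sketch of layered security controls ("defense in depth"): each
# request must pass several independent checks, and every failure is logged
# for later auditing. Checks, roles, and tokens are illustrative only.

def audit_log(message: str) -> None:
    print(f"AUDIT: {message}")  # stand-in for a real logging/auditing pipeline

def is_authenticated(token: str) -> bool:
    return token == "valid-session-token"  # layer 1: authentication

def is_authorized(role: str, action: str) -> bool:
    return action == "read" or role == "admin"  # layer 2: authorization

def is_valid_input(payload: dict) -> bool:
    return isinstance(payload.get("account_id"), int)  # layer 3: input validation

def handle_request(token: str, role: str, action: str, payload: dict) -> str:
    for check, args in [
        (is_authenticated, (token,)),
        (is_authorized, (role, action)),
        (is_valid_input, (payload,)),
    ]:
        if not check(*args):
            audit_log(f"{check.__name__} rejected the request")
            return "denied"
    audit_log("request allowed")
    return "allowed"

print(handle_request("valid-session-token", "user", "read", {"account_id": 42}))
```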

Attack Surface

Architects should prevent attack surfaces from growing unnecessarily. For example, distributed services, including edge deployments, may increase vulnerabilities that can lead to a system breach. Putting together a hybrid system where components exist in the cloud and on-premise can add weaknesses that can have catastrophic consequences if not detected early.

Third-Party Integrations

Close to 70% of applications use open-source libraries that contain at least one security flaw. Many open-source components are never tested for possible vulnerabilities because developers are unaware of what comes along with a third-party library. Although the choice of libraries and services may fall to designers, system architects should create infrastructures that minimize third-party vulnerabilities through testing.

Reviews

Architecture reviews should be part of every development project and placed on the development schedule. The earlier systemic flaws are discovered in the process, the easier and less costly they are to fix. Automated tools can facilitate the review process.

Visibility

Limited visibility increases the security risk of any software deployment. Visibility in single application architectures is much easier to achieve than with distributed systems. The decoupling of services may provide improvements in flexibility and resilience, but it increases the complexity of monitoring a system's infrastructure. With so many components, it's a challenge to gain a holistic view of the entire system.

For example, a single transaction may travel through hundreds of services running on-premise and in the cloud. The services log their activities locally, but correlating events across hundreds of separate log files is nearly impossible. How can IT staff determine the primary cause of a failure if they can't trace the flow through the system?

Not only is a lack of visibility an impediment to system performance, but it also becomes a crucial concern when it weakens a company's security posture. With information stored in multiple locations across an enterprise, accessing the data in real-time is almost impossible.

Centralized Logging

The infrastructure should support a centralized logging function that can ingest data from all services. By making logging a core function, architects can standardize data formats. A common format reduces data cleaning requirements for AI and big data applications. A central logging mechanism enables IT teams to analyze the activities across the enterprise quickly by eliminating the need for data conversion.

Given the time involved in data prep for advanced technologies, architects can help reduce ongoing costs with centralized logging. Adding trace capabilities enables IT to isolate requests as they travel through the system. Using trace identification numbers allows AI-powered tools to identify patterns that IT may miss.
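
A minimal sketch of what standardized, trace-aware logging might look like is shown below. The JSON fields and service names are assumptions rather than a specific logging standard; the point is that every service emits the same format and carries the same trace ID so a central collector can stitch a request back together.

```python
# Minimal sketch of centralized, trace-aware logging: every service emits
# records in one common JSON format and attaches a trace ID so a central
# collector can correlate a single request across services. Field names
# are illustrative, not a specific logging standard.

import json, logging, time, uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("central-logging")

def log_event(trace_id: str, service: str, message: str, **fields):
    record = {
        "timestamp": time.time(),
        "trace_id": trace_id,  # lets analytics and AI tools follow the request flow
        "service": service,
        "message": message,
        **fields,
    }
    logger.info(json.dumps(record))

trace_id = str(uuid.uuid4())
log_event(trace_id, "web-frontend", "request received", path="/checkout")
log_event(trace_id, "payment-service", "charge authorized", amount=42.50)
```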

AI-Powered Tools

Any architecture should include infrastructure monitoring. Collecting data enables real-time performance assessments and security alerts. Performance metrics can be defined and evaluated during operation. Establishing key performance criteria such as mean time to detect or respond helps ensure that the application meets its performance goals.

AI performance tools can provide insights into system operations. In hybrid environments, AI can deliver a more comprehensive evaluation of system performance. Making AI part of the infrastructure design ensures that sufficient storage and computing power are available.

Scalability

Architects need to consider cost when setting scalability criteria. If not, deployment costs may become too high and fail to meet business objectives for cost containment. Scalability capabilities must address both increases and decreases in capacity requirements. System-wide monitoring and logging are as important to scalability as they are to visibility.

Monitoring

Define performance metrics and apply them to all components of a distributed system. Performance data can then be displayed on a dashboard in real time as the system scales. Potential obstacles can be quickly identified and adjustments made before an entire system fails. Monitoring should always be operational so that scaling decisions can be made based on data.

Bottlenecks

Shared resources are potential bottlenecks. Architects should look at capacity when building a system to ensure that upstream activities do not overwhelm downstream capacity. Otherwise, a cascading effect may result in system failure. Bottlenecks may occur in the following areas:

  • Database

  • Message Queues

  • Network Connections

  • Threads

Monitoring helps assess the impact of scaling on shared resources. It can expose slow-performing microservices that may lead to cascading failures. As part of the design, controls should be incorporated to alert staff when the system falls below a certain threshold. This capability allows IT personnel to see potential failures before they happen, making it possible to control or prevent a service failure before it impacts the entire system.
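
As a simple illustration of such a control, the sketch below checks a shared message queue against a capacity threshold and raises an alert before downstream consumers are overwhelmed. The metric source and threshold are hypothetical; a production system would feed this from its monitoring stack.

```python
# Minimal sketch of threshold-based alerting on a shared resource (a message
# queue). The threshold and metric source are illustrative; in practice the
# depth would come from the monitoring system and the alert would page staff
# or trigger auto-scaling.

QUEUE_DEPTH_THRESHOLD = 1000  # alert before downstream consumers are overwhelmed

def check_queue_depth(current_depth: int) -> None:
    if current_depth > QUEUE_DEPTH_THRESHOLD:
        print(f"ALERT: message queue depth {current_depth} exceeds {QUEUE_DEPTH_THRESHOLD}")
    else:
        print(f"OK: queue depth {current_depth}")

check_queue_depth(1250)
```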

Portability

Being able to move an application from one platform to another is a crucial business objective that every architecture needs to address. Organizations can become tied to a cloud provider in today's environment unless the infrastructure can be vendor-agnostic. This capability is central to avoiding vendor lock-in and unexpected price increases.

Dimensions

Portability has three dimensions:

  • Replication. Systems should be designed to allow multiple instances to operate as a cohesive whole. Virtual machines and containers can facilitate automated replication in cloud environments.

  • Migration. Migration refers to multi-cloud deployments that support a defensive strategy to protect against vendor lock-in and increased costs. More organizations are looking at multi-cloud implementations to remain more competitive when it comes to vendor pricing.

  • Lifecycle. Continuous development means a constant cycle of service creation or modification. This agile methodology means developers need to test throughout a product's lifecycle. Architectures must support moving applications from development, through testing, and into deployment.

Ensuring that infrastructures support all levels of portability is a necessity in 2021 to operate in an environment of multi-cloud deployments and agile methodologies.

APIs

Application programming interfaces (APIs) are pieces of software that enable software programs to communicate. They define a set of standards to follow to allow the exchange of data. There are three types of APIs, each of which supports a different level of portability.

  • Infrastructure. Infrastructure APIs operate at a low level where scalability and load balancing are controlled. These APIs support provisioning and managing resources.

  • Service. Service APIs define how services communicate with one another. They may establish connections to databases, messaging systems, and storage services.

  • Application. Application APIs enable data exchange between software applications. They define how data is exchanged regardless of platform.

API development ensures that a system can be moved from one platform or cloud to another without impacting its operations.
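
At the application level, the exchange can be as simple as the sketch below: a client requests data and receives it in a platform-neutral format (JSON over HTTP), so either side can move to a different platform without changing the contract. The URL and fields are illustrative.

```python
# Minimal sketch of an application-level API exchange: data moves between
# programs as JSON over HTTP, independent of the platform either side runs
# on. The endpoint and fields are hypothetical.

import json
from urllib.request import urlopen

def fetch_orders(base_url: str) -> list:
    # Hypothetical endpoint exposed by another application.
    with urlopen(f"{base_url}/orders") as response:
        payload = json.loads(response.read().decode())
    return payload.get("orders", [])

if __name__ == "__main__":
    for order in fetch_orders("http://localhost:8001"):
        print(order["id"], order["total"])
```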

Flexibility

Flexibility refers to a system's ability to adapt to change in usage and environment without involving structural change. Without flexibility, modifications may require infrastructure changes that impact the viability of the software.

Continuous Deployment

A flexible architecture enables development teams to optimize deployment while delivering system modifications. A flexible core means adding extensions and making upgrades as seamlessly as possible with minimal impact on released products. In a continuous deployment environment, the lack of flexibility stalls the development process and increases the likelihood of system failures or flaws.

Configuration or Customization

Software changes can occur through customization or configuration. Customization requires a technical understanding of the system in order to deliver software components such as new plugins, APIs, or integrations. Configuration does not require the same level of technical expertise. It is focused on changes to paths, metadata, or feature activation.

Flags or toggles enable changes in an application's operation. By turning on a flag, a feature becomes available, and no added code is required. However, if the configuration model does not include the requested feature in the released version, custom software would be required to add the functionality to the core product and activate it through a flag or toggle.
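
A minimal sketch of the flag idea is shown below: behavior changes through configuration rather than new code. The flag names and values are hypothetical; in practice they would be loaded from a configuration file or flag service.

```python
# Minimal sketch of a feature flag/toggle: behavior changes through
# configuration, not new code. Flag names and values are hypothetical;
# in practice they would come from a config file or flag service.

FEATURE_FLAGS = {
    "new_checkout_flow": True,
    "beta_recommendations": False,
}

def is_enabled(flag: str) -> bool:
    return FEATURE_FLAGS.get(flag, False)

def checkout(cart_total: float) -> str:
    # The new flow already ships in the release; the flag decides
    # whether it is active for this deployment.
    if is_enabled("new_checkout_flow"):
        return f"new checkout flow, total ${cart_total:.2f}"
    return f"legacy checkout flow, total ${cart_total:.2f}"

print(checkout(42.50))
```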

Architects should evaluate the flexibility needs of software projects to determine the best model for implementing changes over the product's lifecycle. A path should exist for customization and configuration that ensures continuous deployments without structural impacts.

Sustainability

Sustainability covers the mechanisms required to maintain and expand an application with minimal errors or vulnerabilities. It is tied to the anticipated lifecycle of a given solution. The longer the lifecycle, the more difficult sustainability becomes. A sustainable product enables developers to expand functionality without structural changes so that the solution can be delivered quickly and error-free.

Sustainability has also come to mean how software development can reduce its carbon footprint. Although software doesn't generate a carbon footprint, the environments in which it operates do. Thinking about how technology can reduce its carbon footprint is another part of sustainability.

Business Objectives

The long-term business objectives should influence the sustainability of the infrastructure. The anticipated lifecycle of a product drives sustainability because the requirements for sustaining software for five years are significantly different from maintaining it for 20.

It is the architect's responsibility to weigh the cost of development against the cost of maintenance to decide what level of sustainability should be met. Making a solution easier to maintain translates into a more profitable lifecycle, no matter the length.

Being Green

Software may not be energy-intensive, but the hardware it operates on is. A recent study found that an AI model achieved a 96.17% accuracy rate when classifying flowers while expending 964 joules of energy. Increasing the accuracy by almost 2% increased the energy consumption to 2,815 joules; a further 0.08% increase in accuracy required 400% more energy than the original 964 joules. It is estimated that a single neural network model could emit more carbon than the entire lifecycle of five cars.

Architects should consider the environmental impact of infrastructure over the lifetime of the product. More organizations are being tasked with lowering their carbon emissions, regardless of industry.

Resilience

Software resilience is the ability to function despite the failure of components. Resilience requires an architecture that can support redundancy and segmentation to ensure continuous operations. Building resilience into a system is challenging, especially for distributed environments. With the multiple layers of networks, services, and infrastructure, tracking the interactions to ensure resilience can be daunting.

The world can change overnight, and architectural requirements can shift in unexpected directions. That doesn't mean throwing out the plans and starting over. If the software architecture is built on the following principles, it should withstand the test of time:

  • Security

  • Visibility

  • Scalability

  • Portability

  • Flexibility

  • Sustainability

Whether a company is looking to develop software in-house or outsource to a third party, these essential capabilities are needed to build an enduring architecture.

About Baytech

Baytech is passionate about the technology we use to build custom business applications, especially enterprise solutions that optimize business processes. We’ve been delivering software solutions in a variety of technologies such as Event Sourcing since 1997. Our success is due to the skill and efficiency of our senior staff, which includes software engineers, project managers and DevOps experts. All of our engineers are onshore, salaried staff members.

We focus on the quality, usability and scalability of our software, and don’t believe in mitigating cost at the risk of quality. We manage project costs by implementing an efficient development process that’s completely transparent and uses the latest standards and practices to build software right the first time. Contact us today to learn more about Event Sourcing and how it can help your business. Find us online at https://www.baytechconsulting.com/contact.