Reading Time: 12 minutes
Software development is the process of creating and maintaining the various components of software, including applications and frameworks. This process takes the software from its original conception as an idea to its final manifestation, usually in a planned, structured manner. Software development may include many specific activities such as gathering requirements, prototyping, modification, testing and maintenance. Software is often developed separately from hardware and other applications, as occurs with system software. However, the development of embedded software, such as that used to control consumer products, involves integrating the development of the software with that of the associated product.
The reasons for developing commercial software may be generally classified in the categories of meeting a specific need for a particular client or meeting the general needs of a potential user base. In the case of meeting a specific need, a software developer creates custom software according to the specifications of its client. For meeting general needs, a developer must first identify the software’s user base and determine their requirements. The increasing need for quality control in software development has resulted in the evolution of software engineering as a discipline, which attempts to take a systematic approach towards improving software quality.
Software development includes many specific services such as the following:
- Custom Software Development
- Web Application Development
- Mobile Application Development
- Cloud Computing
- DevOps Automation
- Software Prototyping
- Quality Assurance
- Systems Integration
Custom Software Development
Large organizations frequently develop custom software to fill in the gaps of their existing commercial off-the-shelf (COTS) solutions. These most often include applications for content management, customer management, human resource management and inventory management. In many cases, an organization’s custom software was developed before the availability of COTS software to perform the required functions.
Custom software is often more expensive than COTS software because development costs can’t be distributed over multiple implementations, as is the case with COTS software. However, COTS software may require customization before it can adequately support the operations of a particular implementation. The time and money required to customize COTS software can thus be greater than that needed to develop custom software.
Another advantage of custom software is that the customer typically owns the source code, which allows for the possibility of modifying the code to meet future requirements. However, modern COTS software often includes application programming interfaces (APIs) and domain-specific languages (DSLs) that provide extensibility. These features allow COTS software to accommodate a great degree of customization without requiring access to the core system’s source code.
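The extension mechanism described above can be sketched in a few lines. This is an illustrative example, not any vendor's actual API: a hypothetical COTS system exposes an extension registry so customers can attach behavior to system events without modifying the core source code.

```python
# Sketch of API-based extensibility. All names here (ExtensionRegistry,
# "invoice_created", etc.) are hypothetical and for illustration only.

class ExtensionRegistry:
    """Maps event names to customer-supplied callback functions."""
    def __init__(self):
        self._hooks = {}

    def register(self, event, callback):
        # Customers attach custom logic to a named system event.
        self._hooks.setdefault(event, []).append(callback)

    def fire(self, event, payload):
        # The core system invokes all registered callbacks for an event.
        return [callback(payload) for callback in self._hooks.get(event, [])]

# Customer-side customization: no access to the core system's source needed.
registry = ExtensionRegistry()
registry.register("invoice_created", lambda inv: f"emailed {inv['customer']}")

print(registry.fire("invoice_created", {"customer": "acme", "total": 100}))
# ['emailed acme']
```

The core system never needs recompiling; the customization lives entirely on the customer's side of the API boundary.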
The factors used to determine whether a particular problem should be solved with custom software may generally be categorized into financial, supplier and implementation issues.
This decision requires a thorough cost-benefit analysis. The primary cost of COTS software is the usage license, which is easily quantified since it must be paid up front. On the other hand, the costs and benefits of custom software are always subject to some degree of uncertainty.
The primary supplier issue in the case of COTS software is the length of time the supplier will remain in business. Obtaining support and customization from a third party may not be feasible in the case of COTS software, especially when the supplier goes out of business unexpectedly. In the case of custom software, development can often be performed in-house or outsourced. If in-house development is impractical, the customer must consider the reputation and track record of potential outsourcers.
COTS software typically standardizes business processes across a range of implementations, which is less likely with custom software. In the case of a large enterprise with many implementations, COTS software can offer gains in operational efficiency. However, realizing this advantage assumes that each implementation doesn’t require significant customization, which is often not the case.
Web Application Development
Web application development is an extension of standard software development with distinctive characteristics such as an increased need for an iterative development process. Security is also a greater issue for web applications than traditional desktop applications since they have much greater exposure to attack. For example, a website that’s used to trade stocks may be accessed by millions of users with a strong financial incentive to exploit vulnerabilities in the application. Web developers can mitigate this risk with methodologies that place greater emphasis on documentation, testing, change control and quality assurance, especially for the high workloads common with web applications.
Web applications tend to have shorter development lifecycles and a greater variety of business models than desktop applications. Development teams are also smaller, but with a greater variety of test plans in most cases as compared to traditional software development. Additional differences include more evaluations from end-users, resulting in more specific requirements.
The testing process of web applications generally has the same phases as traditional development, including unit, integration and system testing. The general goal of this process is to determine if the application responds as expected and identify the changes needed to correct its behavior. The information that web applications use has a higher rate of errors, including omissions, redundancies and incorrect labels. Web applications also have multiple layers and a greater number of dynamic configurations. The testing process for web applications is therefore more complex since each layer requires separate testing.
Web developers rely on frameworks and reuse code more frequently than desktop developers to reduce time-to-market. Reusing external components is particularly important for reducing development time, which can also reduce costs in many cases. However, the time needed to develop small components is often less than that needed for developers to learn new APIs. Furthermore, organizations may want greater control over the development of components that are critical to their operations.
Mobile Application Development
Mobile applications, or apps, are specifically designed for use on mobile devices such as smartphones, tablets and digital assistants. They may be installed as part of the device’s manufacture or delivered afterwards from a web server. Mobile developers must consider a range of display sizes, hardware and configurations due to the current lack of standardization for mobile devices.
The limited display size of mobile devices makes the user interface (UI) an even more critical design element in mobile app development. Mobile designers must also focus on the interaction between the user and UI, which involves a tighter integration of hardware and software than in conventional software development. Additional factors that are more important for mobile developers include the mobility of these devices, more varied user inputs and limited screen size. Mobile apps routinely obtain context from user activity based on location and scheduling, which is rarely a significant factor in desktop development. The UI for mobile apps must also minimize the number of keystrokes and other interactions needed to accomplish a task.
Mobile UIs rely on a backend to support organizational functions such as data routing, security, off-line work and the synchronization of various services. A variety of middleware components such as mobile backend as a service (MBaaS), service-oriented architecture (SOA) infrastructure and mobile app servers support this functionality.
The selection of a development platform is a critical consideration in the development of a mobile app, with the most important factors being the existing infrastructure and current skills of the developer. It’s also important for developers to consider the users’ expectations, which can vary greatly according to their platform. A mobile app’s performance is an even more important factor in platform selection than it is for desktop applications, given the strong correlation between a mobile app’s performance and user satisfaction.
Mobile platform developers have published guidelines and benchmarks to aid developers in selecting between native and cross-platform development. For example, Android developers typically work in Android Studio, the platform’s official Integrated Development Environment (IDE). Apple iOS developers use Objective-C or Swift to develop code in the Xcode IDE, while BlackBerry and Windows developers work in the proprietary IDEs for those platforms.
Cloud Computing
Cloud computing is the availability of computing resources such as processing and data storage upon demand, without active management on the part of the user. This sharing of resources allows cloud computing to achieve a great economy of scale. Cloud computing generally refers to the use of data centers to serve users over the internet, usually through functions distributed to multiple locations from central servers. These servers may also be referred to as edge servers if they’re relatively close to their users. Clouds are considered private if they’re accessible by one organization and public if they’re accessible by many organizations.
The National Institute of Standards and Technology defines Platform as a Service (PaaS) as the capability of deploying user-created or acquired applications onto the cloud infrastructure through the use of programming languages, libraries, tools and other services supported by the cloud provider. PaaS users don’t control the underlying infrastructure such as servers, networks, operating system (OS) or storage. However, they do control the applications they deploy, including configuration settings for the applications’ environment.
A PaaS allows users to develop, run and manage applications without managing the complex infrastructure that software development normally requires, including databases and servers. This benefit lets developers focus on the application and its data, which reduces development time. In the case of a private PaaS, the organization’s IT department manages environmental components such as the OS, middleware, storage and networking. The provider manages these components for a public PaaS.
Development on a cloud platform typically requires many tools from multiple vendors. These vendors often customize their tools for the users of a particular PaaS, which may be maintained by either the user or vendor. A PaaS often includes tools for designing, developing, testing and deploying applications, in addition to other functions such as collaboration between team members, integrating web services, managing application states and version control. They may also provide mechanisms for managing these services, including workflow management.
PaaS includes both advantages and disadvantages for cloud developers. The primary advantages include a great reduction of the overall complexity of the development environment, allowing developers to program at a higher level. Application development is also more effective because PaaS automatically allocates platform resources in response to demand, thus facilitating the application’s maintenance and modification. The biggest disadvantages of developing cloud software on a PaaS include the higher cost at large scale, greater difficulties in routing traffic, reduced control and fewer operational features than standard cloud development.
DevOps Automation
DevOps is the practice of combining software development with operations, which are typically separate functions in a traditional data center. The primary goal of DevOps is to shorten the software development life cycle (SDLC) and continuously deliver high-quality software.
The handling of a change request (CR) shows how DevOps can streamline operations. A user in a data center that doesn’t use DevOps must initiate a CR through email or a dedicated helpdesk application. The operations team receives the request and communicates it to the development team for the affected system. The development team begins working on the issue and provides the operations team with periodic updates.
Once the development team completes the work indicated by the CR, they pass it to a testing team that then deploys the solution to a test environment. The testing and development team may correspond on the issue to resolve any additional problems uncovered during testing. The operations team can then deploy the completed solution to the production system.
This process has a number of disadvantages such as process gaps requiring manual intervention, communication delays and missing information chains. These shortcomings introduce considerable latency when transferring information between the user and development. The presence of multiple stakeholders also makes this process prone to error and delays.
The automation of DevOps processes generally involves repackaging platforms and applications into reusable modules by using technologies such as containerization and virtualization. This process requires many tools to automate all phases of the SDLC according to the DevOps philosophy, especially tools for building and testing code. These tools must also be integrated so they can be used by all the stakeholders in the SDLC, including operations, engineering, development and quality assurance (QA). Once they’re fully integrated, system administrators can implement a fully automated DevOps process with many separate tools. This capability provides better coordination between the various teams, eventually resulting in more rapid releases of software.
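The staged, stop-on-failure pipeline at the heart of DevOps automation can be sketched abstractly. The stage names and bodies below are illustrative stand-ins, not tied to any real build tool:

```python
# Minimal sketch of an automated pipeline: each stage is a function that
# reads and updates shared state, and the runner halts on the first failure.

def build(state):
    state["artifact"] = "app-1.0.tar.gz"   # stand-in for a compile/package step
    return True

def run_tests(state):
    return "artifact" in state             # stand-in for an automated test suite

def deploy(state):
    state["deployed"] = state["artifact"]  # stand-in for pushing to production
    return True

def run_pipeline(stages):
    state = {}
    for stage in stages:
        if not stage(state):
            return state, f"failed at {stage.__name__}"
    return state, "success"

state, status = run_pipeline([build, run_tests, deploy])
print(status)            # success
print(state["deployed"]) # app-1.0.tar.gz
```

Real pipelines substitute container builds and test runners for these stubs, but the control flow, ordered stages with an early exit on failure, is the same.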
Software Prototyping
Software prototyping is the process of iteratively creating incomplete versions of an application, resulting in its progressive improvement. This is similar to the prototyping commonly performed in other fields such as manufacturing and mechanical engineering. A software prototype typically only performs a few of the required features and may be quite different from the final product.
The primary advantage of prototyping over traditional software development is that developers receive regular feedback from users, which begins early in the project. Developers and users are able to quickly determine how well the prototype matches the specifications used to build it. Furthermore, project managers are able to determine whether their initial deadlines and milestones are realistic.
The prototyping lifecycle consists of the following four phases, which may be repeated multiple times:
- Requirements: This phase identifies only the most basic requirements of the application, primarily its inputs and outputs. Prototyping typically ignores nonfunctional elements such as security.
- Development: Initial development of a prototype typically focuses on the UI.
- Review: End-users and other stakeholders examine the prototype, provide feedback and propose changes.
- Enhancement: This phase uses the feedback from the review phase to develop the requirements for the next iteration of the prototyping lifecycle. It often includes negotiation between users and developers.
Most software development efforts use some type of prototyping, but it’s particularly effective for applications with many user interactions. These types of systems derive great benefit from the practice of building the system quickly and allowing users to experiment with it early in the SDLC. Transaction processing applications are heavily based on dialogues with the user, making them a strong use case for prototyping.
UI design is one of the best uses for rapid prototyping because it can be difficult to determine the best solution to an interface problem without actually using the interface. The iterative nature of prototyping allows developers to quickly create UIs that best match the user’s needs. Functions that require little user interaction, such as batch processes and computations, rarely benefit from rapid prototyping.
Quality Assurance
QA is an investigative process that informs stakeholders about an application’s quality. It also provides the customer with an independent review of the risks of implementing the software. Software testing techniques include verifying that the software can perform its required tasks and identifying the tasks it can’t perform, some of which may not be user requirements.
The number of discrete tests that are possible is practically infinite, even for the simplest components. Software testing must therefore employ a strategy for selecting the tests to perform, based on the resources available for this task. The testing strategy is usually an iterative process in which an error is detected and fixed before the same test is performed again. This process often detects new bugs, as each fix enables additional portions of the code to execute.
QA personnel often perform software testing as soon as developers produce executable code, rather than waiting for the application to be completely coded. However, the specific approach to software development often determines when testing is performed. For example, most of the testing in a phased approach is performed after the requirements have been defined and developed into a testable program. In comparison, an agile approach to development typically involves concurrent requirements gathering, programming and testing.
Software testing generally involves multiple levels of testing, with unit, integration and system testing being the most common types. The tests at each level are typically performed at different stages in the SDLC.
Unit testing verifies the code’s basic functionality. It’s usually performed at the class level in an object-oriented environment, with the constructors and destructors comprising the minimal test units. Developers typically write these tests as they write the code for each function to ensure it works as expected. Functions typically require at least one test for each branch in the code. Unit testing ensures that each module of the application works independently from the other modules.
Integration testing verifies the function of the interface between two components. Modern applications typically integrate components iteratively, which allows interface problems to be quickly identified and corrected. This design generally involves integration testing with progressively larger numbers of components until it includes the entire application. Integration testing usually requires more coding and reporting than unit tests due to the complexity of component interaction in today’s software. Some applications require breaking larger integration tests into smaller components to locate errors more effectively.
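By contrast with a unit test, an integration test exercises the interface between two components rather than either one alone. The components below are hypothetical stand-ins chosen for illustration:

```python
# Sketch of an integration test across the interface between a repository
# and a report generator. Both classes are illustrative.

class OrderRepository:
    """Stores orders; in a real system this would wrap a database."""
    def __init__(self):
        self._orders = []

    def add(self, order_id, total):
        self._orders.append({"id": order_id, "total": total})

    def all(self):
        return list(self._orders)

class ReportGenerator:
    """Depends on the repository only through its public interface."""
    def __init__(self, repository):
        self.repository = repository

    def revenue(self):
        return sum(order["total"] for order in self.repository.all())

# The integration test: verify the components cooperate across the interface.
repo = OrderRepository()
repo.add("A-1", 40.0)
repo.add("A-2", 60.0)
assert ReportGenerator(repo).revenue() == 100.0
```

Each class could pass its own unit tests and the pair could still fail here, for example if the repository returned tuples while the generator expected dictionaries, which is exactly the kind of defect integration testing exists to catch.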
System testing evaluates the entire application’s compliance with its specifications once all of the components pass integration testing. These specifications include functional and system requirements, which are sometimes tested at the same time. An application’s design and behavior are both tested at this level, in addition to the user’s expectations in some cases.
Systems Integration
Systems integration brings a system’s components together, providing the system with its overarching functionality. A variety of techniques are used to integrate these components, including business process management, networking and even manual programming. The goals of systems integration include improving product performance and quality as well as reducing response times and operational costs. The importance of system integration has increased with the need for greater connectivity between systems, especially through the internet.
The methods of systems integration may generally be classified into vertical, horizontal and star integration.
Vertical integration is the process of integrating subsystems based on their functionality. This approach involves creating functional entities, or silos, which can’t be reused for other functionalities. Vertical integration is the least costly approach to systems integration in the short term, since it only involves vendors that are essential for each silo. However, the total cost of ownership (TCO) is higher for vertical integration, since scaling the system requires the implementation of more silos.
Horizontal integration uses a subsystem known as the Enterprise Service Bus (ESB) to communicate with the other subsystems. This approach means that each subsystem other than the ESB only requires one interface, which reduces costs and increases flexibility when the number of subsystems is large. Horizontal integration can easily replace subsystems with similar functionality just by implementing the interface of the new subsystem with the ESB.
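The one-interface-per-subsystem property can be sketched with a toy bus. This is a simplification of a real ESB (no queuing, routing rules or transformation), intended only to show the shape of the pattern:

```python
# Sketch of horizontal integration: each subsystem implements a single
# interface (subscribe/publish against the bus) instead of one per peer.

class EnterpriseServiceBus:
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, handler):
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, message):
        return [handler(message) for handler in self._subscribers.get(topic, [])]

bus = EnterpriseServiceBus()
# Billing and shipping never reference each other; both talk only to the bus,
# so either can be replaced by re-implementing its single bus interface.
bus.subscribe("order_placed", lambda order: f"billing invoiced {order}")
bus.subscribe("order_placed", lambda order: f"shipping queued {order}")

print(bus.publish("order_placed", "A-1"))
# ['billing invoiced A-1', 'shipping queued A-1']
```

Swapping the shipping subsystem for a new one means registering a new handler; the billing subsystem and the bus itself are untouched.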
Star integration interconnects each subsystem to every other subsystem, which provides the greatest flexibility in reusing functionality. The cost of this integration method is highly dependent on the heterogeneity of the interfaces. Furthermore, the cost of adding subsystems is also high due to the number of interfaces.
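The cost difference between star and horizontal integration follows directly from counting interfaces. For n subsystems, star integration needs an interface for every pair, n(n-1)/2, while horizontal integration needs only one interface per subsystem, all terminating at the ESB:

```python
# Interface counts for n subsystems under the two integration styles.

def star_interfaces(n):
    return n * (n - 1) // 2   # one interface for every pair of subsystems

def horizontal_interfaces(n):
    return n                  # one interface per subsystem, all to the ESB

for n in (4, 10, 20):
    print(n, star_interfaces(n), horizontal_interfaces(n))
# 4 subsystems: 6 vs 4; 10 subsystems: 45 vs 10; 20 subsystems: 190 vs 20
```

The quadratic growth of the star count is why adding subsystems to a star topology is expensive, while the ESB's linear count is why horizontal integration pays off as the number of subsystems grows.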
Software developers often provide a variety of specific services depending on their areas of specialization. This typically includes the development of custom software when COTS software is unable to meet the user’s requirements. Some of these services deal specifically with developing code on various platforms such as desktops, cloud platforms and mobile devices.
Clients may also require a software developer to use a particular methodology or set of tools to create their applications. These constraints may be mandated by the client’s own policies or other sources such as the end user, industry best practices or government regulations. Other services like QA deal with completed applications that have already been developed by the client or a third party. In these cases, the client often needs an independent party to verify the completed work.