IMA modularity simplifies the development process of avionics software:
* Because the structure of the module network is unified, a common API must be used to access the hardware and network resources, which simplifies hardware and software integration.
* The IMA concept also allows application developers to focus on the application layer, reducing the risk of defects in the lower-level software layers.
* As modules often share an extensive part of their hardware and lower-level software architecture, maintenance of the modules is easier than with earlier, application-specific architectures.
* Applications can be reconfigured onto spare modules if the module that supports them is found faulty during operation, increasing the overall availability of the avionics functions.
Communication between the modules can use an internal high-speed computer bus, or can share an external network such as ARINC 429 or AFDX.
There is no single overall hardware or software standard that defines all the mandatory components used in an IMA architecture. However, parts of the API involved in an IMA network have been standardized, such as:
* ARINC 653 for the partitioning constraints imposed on the underlying real-time operating system (RTOS), and the associated API
* AFDX for the data network bus.
Examples of IMA architecture
Examples of aircraft avionics that use an IMA architecture:
* Rafale: Thales’ IMA architecture is called MDPU (Modular Data Processing Unit)
* F-22 Raptor
* Airbus A380 
* Boeing 787: the GE Aviation Systems (formerly Smiths Aerospace) IMA architecture is called the Common Core System
* Dassault Falcon 900, Falcon 2000, and Falcon 7X: Honeywell’s IMA architecture is called MAU (Modular Avionics Units), and the overall platform is called EASy
* Sukhoi Superjet 100
* ATR 42
* ATR 72
* Airbus A350
Avionics software
Avionics software is embedded software with legally mandated safety and reliability concerns, used in avionics. The main difference between avionics software and conventional embedded software is that the development process is required by law and is optimized for safety.
Interestingly, some claim that the process described below is only slightly slower and more costly (perhaps 15 percent) than the normal ad-hoc processes used for commercial software. Since most software fails because of mistakes, eliminating the mistakes at the earliest possible step is also a relatively inexpensive, reliable way to produce software. In some projects, however, mistakes in the specifications may not be detected until deployment. At that point, they can be very expensive to fix.
The basic idea of any software development model is that each step of the design process has outputs called “deliverables.” If the deliverables are tested for correctness and fixed, then normal human mistakes can’t easily grow into dangerous or expensive problems. Most manufacturers follow the waterfall model to coordinate the design process, but almost all explicitly permit earlier work to be revised. The result is more often closer to a spiral model.
For an overview of embedded software see embedded system and software development models. The rest of this article assumes familiarity with that information, and discusses differences from commercial embedded systems and commercial development models.
The main difference between avionics software and other embedded systems is that the actual standards are often far more detailed and rigorous than commercial standards, usually described by documents with hundreds of pages.
Since the process is legally required, most processes have documents or software to trace requirements from numbered paragraphs in the specifications and designs to exact pieces of code, with exact tests for each, and a box on the final certification checklist. This is specifically to prove conformance to the legally mandated standard.
A specific project may deviate from the processes described here, either because alternative methods are used or because its safety level permits less rigor.
Almost all software development standards describe how to perform and improve specifications, designs, coding, and testing (see software development model). However, avionics software development standards add some steps to the development process for safety and certification:
Projects with substantial human interfaces are usually prototyped or simulated. The video tapes are usually retained, but the prototype is retired immediately after testing, because otherwise senior management and customers may believe the system is complete. A major goal is to find human-interface issues that can affect safety and usability.
Safety-critical avionics usually have a hazard analysis. Very early in the project, there’s usually at least some vague idea of the main parts of the project. An engineer then takes each block of a block diagram and considers the things that could go wrong with that block. Then the severity and probability of the hazards are estimated. The problems then become requirements that feed into the design’s specifications.
Projects involving military cryptographic security usually include a security analysis, using methods very like the hazard analysis.
As soon as the engineering specification is complete, writing the maintenance manual can start. A maintenance manual is essential to repairs, and of course, if the system can’t be fixed, it won’t be safe.
There are several levels to most standards. A low-safety product such as an in-flight entertainment unit (a flying TV) may escape with a schematic and procedures for installation and adjustment. A navigation system, autopilot or engine may have thousands of pages of procedures, inspections and rigging instructions. Documents are now (2003) routinely delivered on CD-ROM, in standard formats that include text and pictures.
One of the odder documentation requirements is that most commercial contracts require an assurance that system documentation will be available indefinitely. The normal commercial method of providing this assurance is to form and fund a small foundation or trust. The trust then maintains a mailbox and deposits copies (usually in ultrafiche) in a secure location, such as rented space in a university’s library (managed as a special collection), or (more rarely now) buried in a cave or a desert location.
Design and specification documents
These are usually much like those in other software development models. A crucial difference is that requirements are usually traced as described above. In large projects, requirements traceability is such a large and expensive task that it justifies large, expensive computer programs to manage it.
Code production and review
The code is written, then usually reviewed by a programmer (or a group of programmers) who did not write it originally (another legal requirement). Special organizations also usually conduct code reviews with a checklist of possible mistakes. When a new type of mistake is found, it is added to the checklist and fixed throughout the code.
The code is also often examined by special programs that perform static code analysis, such as the SPARK Examiner for SPARK (a subset of the Ada programming language) or lint for the C family of programming languages (primarily C). The compilers, or checkers such as lint, verify that the types of data are compatible with the operations performed on them; such tools are also regularly used to enforce strict usage of valid programming language subsets and programming styles. Another set of programs measures software metrics, to look for parts of the code that are likely to have mistakes. All the problems found are fixed, or at least understood and double-checked.
Some code, such as digital filters, graphical user interfaces and inertial navigation systems, is so well understood that software tools have been developed to write it. In these cases, specifications are developed and reliable software is produced automatically.
“Unit test” code is written to exercise every instruction of the code at least once to get 100% code coverage. A “coverage” tool is often used to verify that every instruction is executed, and then the test coverage is documented as well, for legal reasons.
This test is among the most powerful. It forces detailed review of the program logic, and detects most coding, compiler and some design errors. Some organizations write the unit tests before writing the code, using the software design as a module specification. The unit test code is executed, and all the problems are fixed.
As pieces of code become available, they are added to a skeleton of code and tested in place to make sure each interface works. Usually the built-in tests of the electronics are finished first, to begin burn-in and radio-emission tests of the electronics.
Next, the most valuable features of the software are integrated. It is very convenient for the integrators to have a way to run small selected pieces of code, perhaps from a simple menu system.
Some program managers try to arrange this integration process so that after some minimal level of function is achieved, the system becomes deliverable at any following date, with increasing amounts of features as time passes.
Black box and acceptance testing
Meanwhile, the test engineers usually begin assembling a test rig, and releasing preliminary tests for use by the software engineers. At some point, the tests cover all of the functions of the engineering specification. At this point, testing of the entire avionic unit begins. The object of the acceptance testing is to prove that the unit is safe and reliable in operation.
The first test of the software, and one of the most difficult to meet in a tight schedule, is a realistic test of the unit’s radio emissions. This usually must be started early in the project to assure that there is time to make any necessary changes to the design of the electronics.
Each step produces a deliverable, either a document, code, or a test report. When the software passes all of its tests (or enough to be sold safely), these are bound into a certification report, which can literally run to thousands of pages. The designated engineering representative, who has been striving for completion, then decides whether the result is acceptable. If it is, he signs it, and the avionics software is certified.
At this point, the software is usually very good software by any measurement.