EN 50128 standard
When CCFs do not occur at exactly the same time, precautions can be taken by means of comparison methods between the redundant chains, which should lead to detection of a failure before it becomes common to all chains. CCF analysis should take this technique into account. Data Flow Analysis combines the information obtained from control flow analysis with information about which variables are read or written in different portions of code.
The analysis can check for:
- variables that are read before they are written (this is very likely to be an error, and is certainly bad programming practice);
- variables that are written more than once without being read (this could indicate omitted code);
- variables that are written but never read (this could indicate redundant code).
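These anomalies can be illustrated with a short, deliberately faulty fragment (the variable names are invented for illustration); a data flow analysis tool would flag each of the three cases:

```python
def compute_flow_rate(sensor_reading):
    # Anomaly 1: 'total' is read (incremented) before it is ever written,
    # which is almost certainly an error (a missing 'total = 0').
    total += sensor_reading

    # Anomaly 2: 'scaled' is written twice without being read in between,
    # which may indicate omitted code between the two assignments.
    scaled = sensor_reading * 2.0
    scaled = sensor_reading * 2.5

    # Anomaly 3: 'debug_copy' is written but never read afterwards,
    # which may indicate redundant (dead) code.
    debug_copy = scaled

    return scaled
```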
There is an extension of data flow analysis known as information flow analysis, where the actual data flows both within and between procedures are compared with the design intent.
This is normally implemented with a computerised tool where the intended data flows are defined using a structured comment that can be read by the tool. Data Flow Diagrams document how data input is transformed to output, with each stage in the diagram representing a distinct transformation.
Data flow diagrams describe how an input is transformed to an output. They do not, and should not, include control information or sequencing information. Each bubble can be considered as a stand-alone black box which, as soon as its inputs are available, transforms them to its outputs. One of the principal advantages of data flow diagrams is that they show transformations without making any assumptions about how these transformations are implemented. The preparation of data flow diagrams is best approached by considering system inputs and working towards system outputs.
Each bubble shall represent a distinct transformation - its output should, in some way, be different from its input.
There are no rules for determining the overall structure of the diagram and constructing a data flow diagram is one of the creative aspects of system design. Like all design, it is an iterative process with early attempts refined in stages to produce the final diagram. To facilitate software process improvement by recording, validating and analysing relevant data from individual projects and persons.
The relevance of the data is determined by the strategic goals of the organisation. The goals may, for example, be directed towards the evaluation of a particular software development method relative to the claims made for it. Data recording and analysis constitutes an essential part of software process improvement.
The recording of valid data represents an important part of learning more about the software development process and of evaluating alternative software development methods.
Detailed records are maintained during a project, both on a project and on an individual basis; an engineer, for instance, would be required to keep such records. During and at the conclusion of the project these records can be analysed to establish a wide variety of information.
In particular, data recording is very important for the maintenance of computer systems, as the rationale for certain decisions made during the development project is not always known to the maintenance engineers. Due to poor planning, data recording often tends to be excessive in volume and out of focus.
This can be avoided by following the principle that data recording should be driven by goals, questions and metrics of relevance to what is strategically important to the organisation. In order to achieve the desired accuracy, the data recording and validation process should proceed concurrently with the development. To provide a clear and coherent specification and analysis of complex logical combinations and relationships. These related methods use two-dimensional tables to concisely describe logical relationships between Boolean program variables.
The conciseness and tabular nature of both methods make them appropriate as a means of analysing complex logical combinations expressed in code. Both methods are potentially executable if used as specifications. To produce programs which detect anomalous control flow, data flow or data values during their execution and react to these in a predetermined and acceptable manner. Many techniques can be used during programming to check for control or data anomalies.
These can be applied systematically throughout the programming of a system to decrease the likelihood of erroneous data processing. Two overlapping areas of defensive techniques can be identified.
Intrinsic error-safe software is designed to accommodate its own design shortcomings. These shortcomings may be due to plain errors of design or coding, or to erroneous requirements. The following lists some of the defensive techniques: variables should be range checked; where possible, values should be checked for plausibility; and parameters to procedures should be type, dimension and range checked at procedure entry. These first three recommendations help to ensure that the numbers manipulated by the program are reasonable, both in terms of the program function and the physical significance of the variables.
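A minimal sketch of such range and plausibility checking (the variable names, limits and error reaction are assumptions chosen for illustration, not part of the standard):

```python
MIN_TEMP_C = -40.0   # assumed physical limits of the sensor
MAX_TEMP_C = 125.0

def read_temperature(raw_value: float, previous_value: float) -> float:
    # Range check: the value must lie within the physically possible range.
    if not (MIN_TEMP_C <= raw_value <= MAX_TEMP_C):
        raise ValueError(f"temperature {raw_value} outside physical range")

    # Plausibility check: the value must not change implausibly fast
    # (here, more than 10 degrees between two successive readings).
    if abs(raw_value - previous_value) > 10.0:
        raise ValueError("implausible rate of change in temperature")

    return raw_value
```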
Read-only and read-write parameters should be separated and their access checked. Functions should treat all parameters as read-only. Literal constants should not be write-accessible. This helps detect accidental overwriting or mistaken use of variables. Error-tolerant software is designed to 'expect' failures in its own environment, or use outside nominal or expected conditions, and to behave in a predefined manner. Techniques include checking the configuration and integrity of the system, for example at start-up.
This could include both the existence and accessibility of expected hardware and also that the software itself is complete. This is particularly important for maintaining integrity after maintenance procedures.
Some of the defensive programming techniques, such as control flow sequence checking, also cope with external failures. To ensure a uniform layout of the design documents and the produced code, to enforce consistent programming, and to enforce a standard design method which avoids errors. Coding Standards are rules and restrictions on a given programming language intended to avoid potential faults which can be made when using that language.
Style guidelines deal with issues such as formatting and naming conventions; although style can be highly subjective, it affects the readability of code more than anything else.
The establishment of a common and consistent style for a project will facilitate understanding and maintenance of code developed by more than one programmer, and will make it easier for several people to cooperate in the development of the same program.
To detect and mask residual software design faults during execution of a program, in order to prevent safety-critical failures of the system, and to continue operation for high reliability. In diverse programming, a given program specification is implemented N times in different ways.
The same input values are given to the N versions, and the results produced by the N versions are compared. If the result is considered to be valid, the result is transmitted to the computer outputs. The N versions can run in parallel on separate computers, alternatively all versions can be run on the same computer and the results subjected to an internal vote.
Different voting strategies can be used on the N versions, depending on the application requirements. If the system has a safe state, then it is feasible to demand complete agreement (all N agree); otherwise a fail-safe output value is used.
For simple trip systems the vote can be biased in the safe direction. In this case the safe action would be to trip if either version demanded a trip. For systems with no safe state, majority voting strategies can be employed. For cases where there is no collective agreement, probabilistic approaches can be used in order to maximise the chance of selecting the correct value, for example, taking the middle value, temporary freezing of outputs until agreement returns etc.
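A minimal sketch of such a voting scheme over N independently produced results (the tolerance, fail-safe value and fall-back strategy are assumptions chosen for illustration):

```python
from statistics import median

FAIL_SAFE = 0.0  # assumed safe output, used only if a safe state exists and is preferred

def vote(results, tolerance=1e-6):
    """Return an output value from N redundant computation results."""
    reference = results[0]
    if all(abs(r - reference) <= tolerance for r in results):
        return reference                      # complete agreement

    for candidate in results:
        agreeing = [r for r in results if abs(r - candidate) <= tolerance]
        if len(agreeing) > len(results) // 2:
            return candidate                  # majority agreement

    return median(results)                    # no agreement: take the middle value

# Example: three diverse versions computing the same quantity
print(vote([10.0, 10.0, 10.2]))   # majority -> 10.0
```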
This technique does not eliminate residual software design faults, but it provides a means to detect and mask them before they can affect safety. Unfortunately, experiments and analytical studies show that N-version programming is not always as effective as desired. Even if different algorithms are used, diverse software versions too often fail on the same inputs. Two alternatives to N-version programming are design diversity and functional diversity.
Design diversity involves the use of multiple components, each designed in a different way but implementing the same function. Functional diversity involves solving the same problem in functionally different ways. Irrespective of the approach, no effective method to assess the level of diversity is currently available. The logical architecture of the system has to be such that it can be mapped onto a subset of the available resources of the system.
The architecture needs to be capable of detecting a failure in a physical resource and then remapping the logical architecture back onto the restricted resources left functioning. Although the concept is more traditionally restricted to recovery from failed hardware units, it is also applicable to failed software units if there is sufficient 'run-time redundancy' to allow a software re-try or if there is sufficient redundant data to isolate the failure.
Although traditionally applied to hardware, this technique is being developed for application to software and, thus, the total system. It shall be considered at the first system design stage.
To test the software adequately using a minimum of test data. The test data is obtained by selecting the partitions of the input domain required to exercise the software. This testing strategy is based on the equivalence relation of the inputs, which determines a partition of the input domain. Test cases are selected with the aim of covering all subsets of this partition.
At least one test case is taken from each equivalence class. The interpretation of the specification may be either input oriented (for example, the values selected are treated in the same way) or output oriented (for example, the set of values leading to the same functional result). Equivalence classes may also be defined on the internal structure of the program; in this case the equivalence classes are determined from static analysis of the program, for example the set of values leading to the same path being executed.
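A minimal sketch of input partitioning for a hypothetical function that accepts speeds from 0 to 300 km/h (the partitions and representative values are assumptions for illustration):

```python
# Hypothetical specification: set_speed(v) accepts 0 <= v <= 300 km/h
# and rejects anything else.  The input domain partitions into three
# equivalence classes; one representative test case is taken from each.
equivalence_classes = {
    "below_range":  -5,     # invalid: negative speed
    "within_range": 120,    # valid: nominal speed
    "above_range":  350,    # invalid: above maximum
}

def set_speed(v):
    # Stub of the function under test (assumed behaviour).
    return 0 <= v <= 300

for name, value in equivalence_classes.items():
    expected = (name == "within_range")
    assert set_speed(value) == expected, f"class {name} failed"
print("one test case per equivalence class passed")
```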
For information comprising n bits, a coded block of k bits is generated which enables errors to be detected and corrected. Different types of code include Hamming codes, cyclic codes and polynomial codes.
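As a simpler illustration of the underlying principle, the sketch below adds a single even-parity bit to a data word; this detects any single-bit error but cannot correct it (codes such as Hamming codes add enough redundancy to correct errors as well):

```python
def encode_with_parity(bits):
    """Append an even-parity bit to a list of 0/1 bits."""
    parity = sum(bits) % 2
    return bits + [parity]

def check_parity(block):
    """Return True if the coded block still has even parity."""
    return sum(block) % 2 == 0

word = [1, 0, 1, 1]
block = encode_with_parity(word)       # [1, 0, 1, 1, 1]
assert check_parity(block)

block[2] ^= 1                          # inject a single-bit error
assert not check_parity(block)         # the error is detected
```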
Testing experience and intuition, combined with knowledge and curiosity about the system under test, may add some uncategorised test cases to the designed test case set. Special values or combinations of values may be error-prone. Some interesting test cases may be derived from inspection checklists. It may also be considered whether the system is robust enough: can the buttons on the front panel be pushed too fast or too often?
What happens if two buttons are pushed simultaneously? Some known error types are inserted in the program, and the program is executed with the test cases under test conditions. If only some of the seeded errors are found, the test case set is not adequate. The ratio of found seeded errors to the total number of seeded errors is an estimate of the ratio of found real errors to the total number of real errors.
This gives a possibility of estimating the number of remaining errors and thereby the remaining test effort:

found seeded errors / total number of seeded errors ≈ found real errors / total number of real errors

The detection of all the seeded errors may indicate either that the test case set is adequate, or that the seeded errors were too easy to find. The limitation of the method is that, in order to obtain any usable results, the error types as well as the seeding positions shall reflect the statistical distribution of real errors.
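As a purely illustrative calculation with invented figures: if 20 errors are seeded and the test case set finds 16 of them (80 %), and the same tests find 32 real errors, the total number of real errors is estimated as 32 × 20/16 = 40, i.e. roughly 8 real errors are estimated to remain.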
If error seeding is used, the location of all errors shall be recorded, and the validator shall ensure that all seeded errors have been removed before the software release. To model, in a diagrammatic form, the sequence of events that can develop in a system after an initiating event, and thereby indicate how serious consequences can occur.
Along the top of the diagram are written, in sequence, the conditions that are relevant to the development following the initiating event which is the target of the analysis.
Starting under the initiating event, one draws a line to the first condition in the sequence. There the diagram branches off into a 'yes' and a 'no' branch, describing how the future developments depend on the condition.
For each of these branches, one continues to the next condition in a similar way. Not all conditions are, however, relevant for all branches. One continues to the end of the sequence, and each branch of the tree constructed in this way represents a possible consequence. The event tree can be used to compute the probability of the various consequences based on the probability and number of conditions in the sequence.
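As an illustrative calculation with invented figures: if the initiating event occurs with a frequency of 10^-3 per year, the first protective condition fails with probability 10^-2 and the second with probability 10^-1, then the branch in which both conditions fail (the most serious consequence) has an expected frequency of 10^-3 × 10^-2 × 10^-1 = 10^-6 per year.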
A 'formal' audit of quality assurance documents aimed at finding errors and omissions. The inspection process consists of five phases: planning, preparation, inspection, rework and follow-up.
Each of these phases has its own separate objective. The complete system development (specification, design, coding and testing) shall be inspected. The assertion programming method follows the idea of checking a pre-condition (before a sequence of statements is executed, the initial conditions are checked for validity) and a post-condition (the results are checked after the execution of a sequence of statements). If either the pre-condition or the post-condition is not fulfilled, the processing stops with an error.
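A minimal sketch of the idea (the guarded computation, its assumed input range and the error reaction are illustrative assumptions):

```python
class AssertionFailure(Exception):
    """Raised when a pre- or post-condition is not fulfilled."""

def safe_sqrt(x: float) -> float:
    # Pre-condition: before the sequence of statements is executed,
    # the initial conditions are checked for validity (assumed range).
    if not (0.0 <= x <= 1.0e6):
        raise AssertionFailure("pre-condition violated: x outside assumed range")

    # The guarded sequence of statements (Newton's method).
    estimate = x if x > 1.0 else 1.0
    for _ in range(60):
        estimate = 0.5 * (estimate + x / estimate)

    # Post-condition: the results are checked after execution.
    if abs(estimate * estimate - x) > 1.0e-6 * max(x, 1.0):
        raise AssertionFailure("post-condition violated: result is not a square root")

    return estimate

print(safe_sqrt(2.0))   # 1.41421356...
```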
To identify software components and their criticality; to propose means for detecting software errors and enhancing software robustness; and to evaluate the amount of validation needed for the various software components.
The depth of the analysis is determined at the level of a single instruction line, a group of instructions, a component, etc. The synthesis identifies the remaining unsafe scenarios and the validation effort needed, given the criticality of each module. SEEA, being an in-depth analysis carried out by an independent team, is a powerful bug-finding method.
To detect faults in a system which might lead to a failure, thus providing the basis for countermeasures to minimise the consequences of failures. Fault detection is the process of checking a system for erroneous states which are caused, as explained before, by a fault within the subsystem to be checked. The primary goal of fault detection is to inhibit the effect of wrong results.
A system which delivers either correct results or no results at all is called "self-checking". Fault detection is based on the principles of redundancy (mainly to detect hardware faults) and diversity (software faults). Some sort of voting is needed to decide on the correctness of results.
Fault detection may be achieved by checks in the value domain or in the time domain at different levels, especially at the physical level (temperature, voltage, etc.). The results of these checks may be stored and associated with the data affected, to allow failure tracking. Complex systems are composed of subsystems. The efficiency of fault detection, diagnosis and fault compensation depends on the complexity of the interactions among the subsystems, which influences the propagation of faults.
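A minimal sketch of checks in the value domain and in the time domain (the limits, names and reaction are assumptions chosen for illustration):

```python
MAX_PRESSURE_BAR = 10.0   # assumed physical upper limit (value domain)
MAX_AGE_S = 0.5           # assumed maximum age of a measurement (time domain)

class FaultDetected(Exception):
    pass

def check_measurement(pressure_bar, timestamp_s, now_s):
    # Value-domain check: the measured value must be physically plausible.
    if not (0.0 <= pressure_bar <= MAX_PRESSURE_BAR):
        raise FaultDetected("pressure outside plausible range")

    # Time-domain check: the measurement must be sufficiently recent,
    # otherwise the producing subsystem is assumed to have failed.
    if now_s - timestamp_s > MAX_AGE_S:
        raise FaultDetected("stale measurement: timing check failed")

check_measurement(4.2, timestamp_s=100.0, now_s=100.1)   # passes both checks
```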
Fault diagnosis isolates the smallest subsystem that may be identified as faulty. Smaller subsystems allow a more detailed diagnosis of faults (identification of erroneous states). Many systems can be defined in terms of their states, their inputs and their actions. Thus, when in state S1 and receiving input I, a system might carry out action A and move to state S2. By defining a system's actions for every input in every state, we can define a system completely.
It is often drawn as a so-called state transition diagram, showing how the system moves from one state to another, or as a matrix in which the dimensions are state and input and whose cells contain the action and new state resulting from receipt of the input in the given state. Where a system is complicated or has a natural structure, this can be reflected in a layered Finite State Machine.
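A minimal sketch of such a state/input matrix for a hypothetical two-aspect signal controller (the states, inputs and actions are invented for illustration):

```python
# (state, input) -> (action, next_state); a complete definition provides an
# entry for every input in every state.
TRANSITIONS = {
    ("red",   "route_set"):      ("clear_signal", "green"),
    ("red",   "route_released"): ("no_action",    "red"),
    ("green", "route_set"):      ("no_action",    "green"),
    ("green", "route_released"): ("set_danger",   "red"),
}

def step(state, event):
    action, next_state = TRANSITIONS[(state, event)]
    return action, next_state

print(step("red", "route_set"))        # ('clear_signal', 'green')
print(step("green", "route_released")) # ('set_danger', 'red')
```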
A Finite State Machine can be checked for completeness (an action and new state defined for every state/input combination), for consistency (only one state change defined for each state/input pair) and for reachability (whether it is possible to get from one state to another by any sequence of inputs). These are important properties for critical systems and they can be checked. Tools to support these checks are easily written. Algorithms also exist that allow the automatic generation of test cases for verifying a Finite State Machine implementation or for animating a Finite State Machine model.
Several extensions of basic FSMs have been devised to improve the description of complex system behaviour. So-called statecharts add hierarchy, composition (parallelism), inter-level transitions, history states, etc. A particularly useful feature is the nesting of internal states and transitions, giving the possibility to reveal or conceal the internal states as needed. The value of formal methods is that they provide a means to symbolically examine the entire state space of a digital design (whether hardware or software) and establish a correctness or safety property that is true for all possible inputs.
However, this is rarely done in practice today, except for the critical components of safety-critical systems, because of the enormous complexity of real systems. Several approaches are used to overcome the astronomically large state spaces associated with real systems.
Although the use of mathematical logic is a unifying theme across the discipline of formal methods, there is no single best "formal method". Each application domain requires different modelling methods and different proof approaches.
Furthermore, even within a particular application domain, different phases of the life-cycle may be best served by different tools and techniques. For example, a theorem prover might be best used to analyse the correctness of a register transfer level description of a Fast Fourier Transform circuit, whereas algebraic derivational methods might best be used to analyse the correctness of the refinement of that design into a gate-level design.
Therefore there are a large number of formal methods under development throughout the world. Several examples of Formal Methods are described in the following subclauses of this bibliography. The list of examples here is not exhaustive. CSP is a technique for the specification of concurrent software systems, i.e. systems of processes which operate concurrently and communicate with each other. CSP provides a language for the specification of systems of processes, and proof methods for verifying that the implementation of processes satisfies their specifications (described as a trace: the permissible sequences of events).
A system is modelled as a network of independent processes. Each process is described in terms of all of its possible behaviours. A system is modelled by composing processes sequentially or in parallel. Processes can communicate (synchronise or exchange data) via channels, the communication only taking place when both processes are ready.
The relative timing of events can be modelled. The theory behind CSP was directly incorporated into the architecture of the Inmos transputer, and the occam language (the native programming language of the transputer) allows a CSP-specified system to be directly implemented on a network of transputers. CCS is a means for describing and reasoning about the behaviour of systems of concurrent, communicating processes. The system design is modelled as a network of independent processes operating sequentially or in parallel.
Processes can communicate via ports (similar to CSP's channels), the communication only taking place when both processes are ready. Non-determinism can be modelled. Starting from a high-level abstract description of the entire system (known as a trace), it is possible to carry out a step-wise refinement of the system into a composition of communicating processes whose total behaviour is that required of the whole system.
Equally, it is possible to work in a bottom-up fashion, combining processes and deducing the properties of the resulting system using inference rules related to the composition rules. HOL (Higher Order Logic) refers to a particular logic notation and its machine support system, both of which were developed at the University of Cambridge Computer Laboratory. LOTOS is a means for describing and reasoning about the behaviour of systems of concurrent, communicating processes.
It overcomes the weakness of CCS in the handling of data structures and value expressions by combining it with a second component based on the abstract data type language ACT ONE. The process definition component of LOTOS could, however, be used with other formalisms for the description of abstract data types. To provide a precise system specification with user feed-back and system validation prior to implementation.
OBJ is an algebraic specification language. Users specify requirements in terms of algebraic equations. The behavioural, or constructive, aspects of the system are specified in terms of operations acting on abstract data types (ADTs).
An ADT is like an Ada package where the operator behaviour is visible whilst the implementation details are 'hidden'. An OBJ specification, and its subsequent step-wise implementation, is amenable to the same formal proof techniques as other formal approaches.
Moreover, since the constructive aspects of the OBJ specification are machine-executable, it is straightforward to achieve system validation from the specification itself. Execution is essentially the evaluation of functions by equational rewriting. This executability allows end-users of the envisaged system to gain a 'view' of the eventual system at the system specification stage, without the need to be familiar with the underlying formal specification techniques.
As with all other ADT techniques, OBJ is only applicable to sequential systems, or to sequential aspects of concurrent systems. OBJ has been widely used for the specification of both small and large-scale industrial applications. Direct expression of safety and operational requirements and formal demonstration that these properties are preserved in the subsequent development steps. Standard First Order Predicate Logic contains no concept of time.
Temporal logic extends First Order logic by adding modal operators such as 'henceforth' and 'eventually'. These operators can be used to qualify assertions about the system.
For example, safety properties might be required to hold 'henceforth', whilst other desired system states might be required to be attained 'eventually' from some other initiating state.
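For illustration (the predicates are invented): writing G for 'henceforth' and F for 'eventually', the safety requirement "the doors are never open while the train is moving" can be expressed as G ¬(moving ∧ doors_open), while the liveness requirement "once the train has stopped, every request to open the doors is eventually granted" can be expressed as G ((request ∧ ¬moving) → F doors_open).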
Temporal formulas are interpreted over sequences of states (behaviours). What constitutes a 'state' depends on the chosen level of description: it can refer to the whole system, a system component or the computer program. Quantified time intervals and constraints are not handled explicitly in temporal logic. Absolute timing has to be handled by creating additional time states as part of the state definition.
VDM is a mathematically based specification technique and a technique for refining implementations in a way that allows proof of their correctness with respect to the specification. The specification technique is model-based in that the system state is modelled in terms of set-theoretic structures on which invariants (predicates) are defined, and operations on that state are modelled by specifying their pre- and post-conditions in terms of the system state.
Operations can be proved to preserve the system invariants. The implementation of the specification is done by the reification of the system state in terms of data structures in the target language and by refinement of the operations in terms of a program in the target language.
Reification and refinement steps give rise to proof obligations that establish their correctness. Whether or not these obligations are carried out is a choice made by the designer. VDM is principally used in the specification stage but can be used in the design and implementation stages leading to source code. It can only be applied to sequential programs or the sequential processes in concurrent systems.
Z is a specification language notation for sequential systems and a design technique that allows the designer to proceed from a Z specification to executable algorithms in a way that allows proof of their correctness with respect to the specification. Z is principally used in the specification stage but a method has been devised to go from specification into a design and an implementation.
It is best suited to the development of data-oriented, sequential systems. Like VDM, the specification technique is model-based in that the system state is modelled in terms of set-theoretic structures on which invariants (predicates) are defined, and operations on that state are modelled by specifying their pre- and post-conditions in terms of the system state. Operations can be proved to preserve the system invariants, thereby demonstrating their consistency.
The formal part of a specification is divided into schemas which allow the structuring of specifications through refinement. Typically, a Z specification is a mixture of formal Z and informal explanatory text in natural language. Formal text on its own can be too terse for easy reading and often its purpose needs to be explained, while the informal natural language can easily become vague and imprecise.
Unlike VDM, Z is a notation rather than a complete method. However, an associated method called B has been developed which can be used in conjunction with Z. Like VDM, the purpose of the B method is to model a system or software formally and to prove that the behaviour of the system or software respects the properties that were made explicit during modelling.
B modelling calls on mathematical objects from set theory. On the one hand, invariants (i.e. properties that must always hold) define the static behaviour of the model; on the other hand, operations establish post-conditions, thus defining its dynamic behaviour. The specification of a complex system or software is made possible by decomposing the model into "machines" tied together by links of different semantics. Two main categories of modelling with the B formalism can be distinguished. The former (historically the first) aims at developing software: in this case, the goal is to produce a program that respects its specification.
The model consists of abstract machines (not necessarily deterministic) and step-by-step refinements of these machines, leading to deterministic implementations written in a pseudo-code called "B0". This pseudo-code can then be automatically translated into a target programming language. The latter aims at modelling systems, and in this case we talk about "Event B": the purpose is to specify, without ambiguity and coherently, a system that fulfils explicit properties.
The model takes into account the system itself and its environment. The dynamics of the system is modelled by "events", and the refinement technique is used to make precise the interactions between the system and its environment. A set of Proof Obligations (logical assertions that are to be formally proved from the hypotheses extracted from the B formal model) is generated automatically. These Proof Obligations guarantee the coherence of the model, in particular that the invariants are preserved by every operation or event and that each refinement conforms to the model it refines. Other Proof Obligations, for example verifying the absence of integer overflow or underflow, are also generated.
Model checking is the process of checking whether a given structure is a model of a given logical formula. The concept is general and applies to all kinds of logics and suitable structures. A simple model-checking problem is testing whether a given formula in propositional logic is satisfied by a given structure. An important class of model checking methods has been developed to algorithmically verify formal systems. This is achieved by verifying whether the structure, often derived from a hardware or software design, satisfies a formal specification, typically a temporal logic formula.
Model checking is most often applied to hardware designs. For software, because of undecidability (see computability theory), the approach cannot be fully algorithmic; typically it may fail to prove or disprove a given property. The structure is usually given as a source code description in an industrial hardware description language or a special-purpose language. Such a program corresponds to a finite state machine, i.e. a directed graph consisting of nodes (or vertices) and edges.
A set of atomic propositions is associated with each node, typically stating which memory elements are 'one'. The nodes represent states of the system, the edges represent possible transitions which may alter the state, and the atomic propositions represent the basic properties that hold at a point of execution. Formally, the problem can be stated as follows: given a desired property, expressed as a temporal logic formula p, and a structure M with initial state s, decide whether M, s ⊨ p.
If M is finite, as it is in hardware, model checking reduces to a graph search.
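A minimal sketch of this graph search over an explicitly enumerated finite state machine, checking the invariant "the gate is closed whenever a train is in the crossing" (the model and the property are invented for illustration):

```python
# States are (train_position, gate) pairs; the transition relation is given
# explicitly.  Checking the invariant reduces to visiting every state
# reachable from the initial state and evaluating the property in each one.
INITIAL = ("far", "open")
TRANSITIONS = {
    ("far", "open"):    [("near", "open")],
    ("near", "open"):   [("near", "closed")],
    ("near", "closed"): [("in", "closed")],
    ("in", "closed"):   [("far", "closed")],
    ("far", "closed"):  [("far", "open")],
}

def holds(state):
    train, gate = state
    return not (train == "in" and gate == "open")   # the safety property p

def model_check():
    visited, stack = set(), [INITIAL]
    while stack:
        state = stack.pop()
        if state in visited:
            continue
        visited.add(state)
        if not holds(state):
            return False, state          # counterexample state found
        stack.extend(TRANSITIONS.get(state, []))
    return True, None

print(model_check())   # (True, None): the property holds in every reachable state
```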
Using theoretical and mathematical models and rules it is possible to prove the correctness of a program or model without executing it. A number of assertions are stated at various locations in the program, and they are used as pre- and post-conditions for various paths through the program. The proof consists of showing that the program transforms the pre-conditions into the post-conditions according to a set of logical rules, and that the program terminates.
The descriptions of these can be found under D. If a fault has been detected, the current state of the system is manipulated to obtain a state which will be consistent some time later. This concept is especially suited to real-time systems with a small database and a fast rate of change of the internal state.
It is assumed that at least part of the system state may be imposed onto the environment, and that only part of the system state is influenced (forced) by the environment. To maintain the more critical system functions available, despite failures, by dropping the less critical functions.
This technique gives priorities to the various functions to be carried out by the system. The design then ensures that should there be insufficient resources to carry out all the system functions, then the higher priority functions are carried out in preference to the lower ones. For example, error and event logging functions may have lower priority than system control functions, in which case system control would continue if the hardware associated with error logging were to fail.
Another example would be a signalling system where in the event of loss of communication with the control centre the local lineside equipment automatically sets the available routes for the direction taken by the highest priority traffic. This would be a graceful degradation because trains on the priority routes would be able to pass through the area affected by the loss of communication with the control centre, but other movements, such as shunting movements, would not be possible.
To identify the effect that a change or an enhancement to software will have on other components in that software, as well as on other systems. Prior to a modification or enhancement being performed on the software, an analysis shall be undertaken to identify the impact of the modification or enhancement on the software and also to identify the affected software systems and components. After the analysis has been completed, a decision is required concerning the reverification of the software system.
This depends on the number of components affected, the criticality of the affected components and the nature of the change.
The possible decisions are: only the changed software component is reverified; all affected software components are reverified; or the complete system is reverified. Data that is globally accessible to all software components can be accidentally or incorrectly modified by any of these components. Any changes to these data structures may require detailed examination of the code and extensive modifications.
Information hiding is a general approach for minimising these difficulties. The key data structures are 'hidden' and can only be manipulated through a defined set of access procedures. This allows the internal structures to be modified or further procedures to be added without affecting the functional behaviour of the remaining software.
For example, a named directory might have access procedures Insert, Delete and Find. The access procedures and internal data structures could be re-written without affecting the external behaviour. This concept of an abstract data type is directly supported in a number of programming languages, but the basic principle can be applied whatever programming language is used.
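A minimal sketch of such a named directory with Insert, Delete and Find access procedures (the internal representation, here a dictionary, is hidden and could be replaced without affecting callers):

```python
class NamedDirectory:
    """Access is possible only through these procedures; the internal
    data structure is hidden from the rest of the software."""

    def __init__(self):
        self._entries = {}          # hidden internal structure

    def insert(self, name, value):
        self._entries[name] = value

    def delete(self, name):
        self._entries.pop(name, None)

    def find(self, name):
        return self._entries.get(name)

directory = NamedDirectory()
directory.insert("axle_counter_7", 42)
print(directory.find("axle_counter_7"))   # 42
directory.delete("axle_counter_7")
```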
To demonstrate that the interfaces of subprograms do not contain any errors, or any errors that lead to failures in a particular application of the software, or to detect all errors that may be relevant. Several levels of detail or completeness of testing are feasible; the most important levels involve testing all interface variables at their extreme values, testing each interface variable individually at its extreme values with other interface variables at normal values, and, where feasible, testing combinations of variable values. These tests are particularly important if the interfaces do not contain assertions that detect incorrect parameter values.
They are also important after new configurations of pre-existing subprograms have been generated. To reduce the probability of introducing programming faults and increase the probability of detecting any remaining faults. The language is examined to identify programming constructs which are either error-prone or difficult to analyse, for example, using static analysis methods.
A language subset is then defined which excludes these constructs. During licensing, a record is made of all relevant details of each program execution. During normal operation, each program execution is compared with the set of licensed executions; if it differs, a safety action is taken. The execution record can be the sequence of individual decision-to-decision paths (DD-paths), or the sequence of individual accesses to arrays, records or volumes, or both.
Different methods of storing execution paths are possible. Hash-coding methods can be used to map the execution sequence onto a single large number or a sequence of numbers. During normal operation, the execution path value shall be checked against the stored cases before any output operation occurs. Since the number of possible combinations of decision-to-decision paths during one program run is very large, it may not be feasible to treat programs as a whole.
In this case, the technique can be applied at component level. To predict the attributes of programs from properties of the software itself, rather than from its development or test history. These models evaluate some structural properties of the software and relate them to a desired attribute such as complexity. Software tools are required to evaluate most of the measures. Some of the metrics which can be applied are given below:
- graph theoretic complexity: this measure can be applied early in the lifecycle to assess trade-offs, and is based on the complexity of the program control graph, represented by its cyclomatic number;
- number of ways to activate a certain component (accessibility): the more easily a component can be accessed, the more likely it is to be debugged;
- Halstead complexity measures: this measure computes the program length by counting the number of operators and operands.
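As an illustrative calculation with invented figures: for a control graph with E = 9 edges, N = 8 nodes and P = 1 connected component, the cyclomatic number is V(G) = E − N + 2P = 9 − 8 + 2 = 3, corresponding to three linearly independent paths through the component (equivalently, two binary decision points plus one).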
Decomposition of a software system into small, comprehensible parts in order to limit the complexity of the software. A Modular Approach, or modularisation, contains several rules for the design, coding and maintenance phases of a software project. These rules vary according to the design method employed during design. Most methods contain rules such as: a software component should have a single, well-defined task or function; connections between components should be limited and strictly defined; and components should communicate only via fully specified interfaces. To ensure that the working capacity of the system is sufficient to meet the specified requirements.
The requirements specification includes throughput and response requirements for specific functions, perhaps combined with constraints on the use of total system resources. The proposed system design is compared against the stated requirements by modelling the system processes and their interactions, the use of resources by each process, and the distribution of demands placed upon the system under average and worst-case conditions. For simple systems, an analytic solution may be possible, whilst for more complex systems some form of simulation is required to obtain accurate results.
Before detailed modelling, a simpler 'resource budget' check can be used, which sums the resource requirements of all the processes. If the requirements exceed the designed system capacity, the design is infeasible.
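A minimal sketch of such a resource budget check (the processes, their CPU and memory figures, and the design margin are invented for illustration):

```python
# Per-process worst-case resource requirements (invented figures).
PROCESSES = {
    "interlocking_logic": {"cpu_pct": 25, "ram_mb": 64},
    "comms_handler":      {"cpu_pct": 15, "ram_mb": 32},
    "event_logger":       {"cpu_pct": 10, "ram_mb": 16},
}
CAPACITY = {"cpu_pct": 100, "ram_mb": 256}
DESIGN_MARGIN = 0.5     # assumed: use at most 50 % of each resource

for resource, capacity in CAPACITY.items():
    demand = sum(p[resource] for p in PROCESSES.values())
    budget = capacity * DESIGN_MARGIN
    status = "OK" if demand <= budget else "INFEASIBLE"
    print(f"{resource}: demand {demand} / budget {budget:g} -> {status}")
```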
Even if the design passes this check, performance modelling may show that excessive delays and response times occur due to resource starvation.
To avoid this situation, engineers often design systems to use only some fraction of the total resources. An analysis is performed of both the System and the Software Requirements Specifications to identify all general and specific, explicit and implicit performance requirements.
The practicability of each performance requirement is then analysed; the main objective is to obtain a list of performance requirements, success criteria and potential measurements. To gain a quantitative figure for the reliability properties of the investigated software. This figure may address the related levels of confidence and significance.
Probabilistic considerations are based either on a probabilistic test or on operating experience. Usually the number of test cases or observed operating cases is very large. In order to facilitate testing, automatic aids are usually employed. These concern the details of test data provision and test output supervision.
Large tests are run on large host computers with the appropriate process-simulation periphery. Test data are selected according to both systematic and random viewpoints. The former concerns the overall test control, for example guaranteeing a test data profile.
These should be addressed as an essential part of any contractual agreement. All the clauses of this European Standard will need careful consideration in any commercial situation. This European Standard is not intended to be retrospective. It therefore applies primarily to new developments and only applies in its entirety to existing systems if these are subjected to major modifications. For minor changes, only 9. The assessor has to analyse the evidence provided in the software documentation to confirm whether the determination of the nature and scope of software changes is adequate. However, application of this European Standard during upgrades and maintenance of existing software is highly recommended.