
Network Working Group                                     D. Oran, Editor
Request for Comments: 1142                        Digital Equipment Corp.
                                                            February 1990


	        OSI IS-IS Intra-domain Routing Protocol
 
Status of this Memo

   This RFC is a republication of ISO DP 10589 as a service to the
   Internet community.  This is not an Internet standard.
   Distribution of this memo is unlimited.
	

NOTE:  This is a bad ASCII version of this document.  The official
document is the PostScript file, which has the diagrams in place.
Please use the PostScript version of this memo.


ISO/IEC DIS 10589
 
Information technology - Telecommunications and information exchange
between systems - Intermediate system to Intermediate system
Intra-Domain routeing exchange protocol for use in Conjunction with
the Protocol for providing the Connectionless-mode Network Service
(ISO 8473)

Technologies de l'information - Communication de données et échange
d'information entre systèmes - Protocole intra-domaine de routage
d'un système intermédiaire à un système intermédiaire à utiliser
conjointement avec le protocole fournissant le service de réseau en
mode sans connexion (ISO 8473)

UDC 00000.000 : 000.0000000000

Descriptors:
 
Contents
	Introduction
	1 	Scope and Field of Application
	2 	References
	3 	Definitions
	4 	Symbols and Abbreviations
	5 	Typographical Conventions
	6 	Overview of the Protocol
	7 	Subnetwork Independent Functions
	8 	Subnetwork Dependent Functions
	9 	Structure and Encoding of PDUs
	10 	System Environment
	11 	System Management
	12 	Conformance
	Annex A 	PICS Proforma
	Annex B 	Supporting Technical Material
	Annex C 	Implementation Guidelines and Examples
	Annex D 	Congestion Control and Avoidance
 
Introduction

This Protocol is one of a set of International Standards produced to
facilitate the interconnection of open systems.  The set of standards
covers the services and protocols required to achieve such
interconnection.  This Protocol is positioned with respect to other
related standards by the layers defined in ISO 7498 and by the
structure defined in ISO 8648.  In particular, it is a protocol of
the Network Layer.  This protocol permits Intermediate Systems within
a routeing Domain to exchange configuration and routeing information
to facilitate the operation of the routeing and relaying functions of
the Network Layer.

The protocol is designed to operate in close conjunction with ISO
9542 and ISO 8473.  ISO 9542 is used to establish connectivity and
reachability between End Systems and Intermediate Systems on
individual Subnetworks.  Data is carried by ISO 8473.  The related
algorithms for route calculation and maintenance are also described.

The intra-domain ISIS routeing protocol is intended to support large
routeing domains consisting of combinations of many types of
subnetworks.  This includes point-to-point links, multipoint links,
X.25 subnetworks, and broadcast subnetworks such as ISO 8802 LANs.

In order to support large routeing domains, provision is made for
Intra-domain routeing to be organised hierarchically.  A large domain
may be administratively divided into areas.  Each system resides in
exactly one area.  Routeing within an area is referred to as Level 1
routeing.  Routeing between areas is referred to as Level 2 routeing.
Level 2 Intermediate systems keep track of the paths to destination
areas.  Level 1 Intermediate systems keep track of the routeing
within their own area.  For an NPDU destined to another area, a Level
1 Intermediate system sends the NPDU to the nearest level 2 IS in its
own area, regardless of what the destination area is.  Then the NPDU
travels via level 2 routeing to the destination area, where it again
travels via level 1 routeing to the destination End System.
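
The hierarchical forwarding rule described above can be summarised in
a few lines of code.  The following Python sketch is an editorial
illustration only (it is not part of the protocol specification); the
names own_area, l1_next_hops and nearest_level2_is are assumptions of
the example.

   def l1_forward(dest_area, dest_id, own_area, l1_next_hops,
                  nearest_level2_is):
       """Next hop chosen by a level 1 Intermediate system."""
       if dest_area != own_area:
           # The destination is in another area: send the NPDU to the
           # nearest level 2 IS in this area, regardless of which
           # area the destination is in.
           return nearest_level2_is
       # The destination is in this area: use the level 1 route.
       return l1_next_hops[dest_id]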
 
Information technology

Telecommunications and information exchange between systems
Intermediate system to Intermediate system Intra-Domain routeing
exchange protocol for use in Conjunction with the Protocol for
providing the Connectionless-mode Network Service (ISO 8473)
 
1 Scope and Field of Application

This International Standard specifies a protocol which is used by
Network Layer entities operating ISO 8473 in Intermediate Systems to
maintain routeing information for the purpose of routeing within a
single routeing domain.  The protocol herein described relies upon
the provision of a connectionless-mode underlying service.  (See ISO
8473 and its Addendum 3 for the mechanisms necessary to realise this
service on subnetworks based on ISO 8208, ISO 8802, and the OSI Data
Link Service.)
 
This Standard specifies: 

a) procedures for the transmission of configuration and routeing
   information between network entities residing in Intermediate
   Systems within a single routeing domain;

b) the encoding of the protocol data units used for the transmission
   of the configuration and routeing information;

c) procedures for the correct interpretation of protocol control
   information; and

d) the functional requirements for implementations claiming
   conformance to this Standard.

The procedures are defined in terms of:

a) the interactions between Intermediate system Network entities
   through the exchange of protocol data units;

b) the interactions between a Network entity and an underlying
   service provider through the exchange of subnetwork service
   primitives; and

c) the constraints on route determination which must be observed by
   each Intermediate system when each has a routeing information base
   which is consistent with the others.
 
2 References

2.1  Normative References

The following standards contain provisions which, through reference
in this text, constitute provisions of this International Standard.
At the time of publication, the editions indicated were valid.  All
standards are subject to revision, and parties to agreements based on
this International Standard are encouraged to investigate the
possibility of applying the most recent editions of the standards
listed below.  Members of IEC and ISO maintain registers of currently
valid International Standards.

ISO 7498:1984, Information processing systems - Open Systems
Interconnection - Basic Reference Model.

ISO 7498/Add.1:1984, Information processing systems - Open Systems
Interconnection - Basic Reference Model - Addendum 1:
Connectionless-mode Transmission.

ISO 7498-3:1989, Information processing systems - Open Systems
Interconnection - Basic Reference Model - Part 3: Naming and
Addressing.

ISO 7498-4:1989, Information processing systems - Open Systems
Interconnection - Basic Reference Model - Part 4: Management
Framework.

ISO 8348:1987, Information processing systems - Data communications -
Network Service Definition.

ISO 8348/Add.1:1987, Information processing systems - Data
communications - Network Service Definition - Addendum 1:
Connectionless-mode transmission.

ISO 8348/Add.2:1988, Information processing systems - Data
communications - Network Service Definition - Addendum 2: Network
layer addressing.

ISO 8473:1988, Information processing systems - Data communications -
Protocol for providing the connectionless-mode network service.

ISO 8473/Add.3:1989, Information processing systems -
Telecommunications and information exchange between systems -
Protocol for providing the connectionless-mode network service -
Addendum 3: Provision of the underlying service assumed by ISO 8473
over subnetworks which provide the OSI data link service.

ISO 8648:1988, Information processing systems - Open Systems
Interconnection - Internal organisation of the Network Layer.

ISO 9542:1988, Information processing systems - Telecommunications
and information exchange between systems - End system to Intermediate
system Routeing exchange protocol for use in conjunction with the
protocol for providing the connectionless-mode network service (ISO
8473).

ISO 8208:1984, Information processing systems - Data communications -
X.25 packet level protocol for Data terminal equipment.

ISO 8802:1988, Information processing systems - Telecommunications
and information exchange between systems - Local area networks.

ISO/TR 9575:1989, Information technology - Telecommunications and
information exchange between systems - OSI Routeing Framework.

ISO/TR 9577:1990, Information technology - Telecommunications and
information exchange between systems - Protocol Identification in the
Network Layer.

ISO/IEC DIS 10165-4, Information technology - Open systems
interconnection - Management Information Services - Structure of
Management Information - Part 4: Guidelines for the Definition of
Managed Objects.

ISO/IEC 10039:1990, IPS-T&IEBS - MAC Service Definition.

2.2 Other References

The following references are helpful in describing some of the
routeing algorithms:

McQuillan, J. et al., The New Routing Algorithm for the ARPANET,
IEEE Transactions on Communications, May 1980.

Perlman, Radia, Fault-Tolerant Broadcast of Routing Information,
Computer Networks, Dec. 1983.  Also in IEEE INFOCOM 83, April 1983.

Aho, Hopcroft, and Ullman, Data Structures and Algorithms,
pp. 204-208 (Dijkstra algorithm).
 
3 Definitions

3.1 Reference Model definitions

This International Standard makes use of the following terms defined
in ISO 7498:

a) Network Layer
b) Network Service access point
c) Network Service access point address
d) Network entity
e) Routeing
f) Network protocol
g) Network relay
h) Network protocol data unit

3.2 Network Layer architecture definitions

This International Standard makes use of the following terms defined
in ISO 8648:

a) Subnetwork
b) End system
c) Intermediate system
d) Subnetwork service
e) Subnetwork Access Protocol
f) Subnetwork Dependent Convergence Protocol
g) Subnetwork Independent Convergence Protocol

3.3 Network Layer addressing definitions

This International Standard makes use of the following terms defined
in ISO 8348/Add.2:

a) Subnetwork address
b) Subnetwork point of attachment
c) Network Entity Title

3.4 Local Area Network Definitions

This International Standard makes use of the following terms defined
in ISO 8802:

a) Multi-destination address
b) Media access control
c) Broadcast medium

3.5 Routeing Framework Definitions

This document makes use of the following terms defined in ISO/TR
9575:

a) Administrative Domain
b) Routeing Domain
c) Hop
d) Black hole
 
 
3.6 Additional Definitions
For the purposes of this International Standard, the following
definitions apply:

3.6.1  Area: A routeing subdomain which maintains detailed routeing
information about its own internal composition, and also maintains
routeing information which allows it to reach other routeing
subdomains.  It corresponds to the Level 1 subdomain.

3.6.2  Neighbour: An adjacent system reachable by traversal of a
single subnetwork by a PDU.

3.6.3  Adjacency: A portion of the local routeing information which
pertains to the reachability of a single neighbour ES or IS over a
single circuit.  Adjacencies are used as input to the Decision
Process for forming paths through the routeing domain.  A separate
adjacency is created for each neighbour on a circuit, and for each
level of routeing (i.e. level 1 and level 2) on a broadcast circuit.

3.6.4  Circuit: The subset of the local routeing information base
pertinent to a single local SNPA.

3.6.5  Link: The communication path between two neighbours.  A Link
is up when communication is possible between the two SNPAs.

3.6.6  Designated IS: The Intermediate system on a LAN which is
designated to perform additional duties.  In particular it generates
Link State PDUs on behalf of the LAN, treating the LAN as a
pseudonode.

3.6.7  Pseudonode: Where a broadcast subnetwork has n connected
Intermediate systems, the broadcast subnetwork itself is considered
to be a pseudonode.  The pseudonode has links to each of the n
Intermediate systems and each of the ISs has a single link to the
pseudonode (rather than n-1 links to each of the other Intermediate
systems).  Link State PDUs are generated on behalf of the pseudonode
by the Designated IS.  This is depicted below in figure 1.
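
NOTE (editorial illustration, not part of this International
Standard) - The following short Python calculation shows the effect
of the pseudonode representation on the number of links reported for
a broadcast subnetwork with n attached Intermediate systems.

   def links_full_mesh(n):
       # Without a pseudonode each IS would report n-1 links, one to
       # every other IS on the subnetwork.
       return n * (n - 1)

   def links_with_pseudonode(n):
       # Each IS reports one link to the pseudonode, and the
       # Designated IS reports n links on behalf of the pseudonode.
       return n + n

   for n in (3, 10, 50):
       print(n, links_full_mesh(n), links_with_pseudonode(n))
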
3.6.8  Broadcast subnetwork: A subnetwork which supports an arbitrary
number of End systems and Intermediate systems and additionally is
capable of transmitting a single SNPDU to a subset of these systems
in response to a single SN_UNITDATA request.

3.6.9  General topology subnetwork: A subnetwork which supports an
arbitrary number of End systems and Intermediate systems, but does
not support a convenient multi-destination connectionless
transmission facility, as does a broadcast subnetwork.

3.6.10  Routeing Subdomain: a set of Intermediate systems and End
systems located within the same Routeing domain.

3.6.11  Level 2 Subdomain: the set of all Level 2 Intermediate
systems in a Routeing domain.

4 Symbols and Abbreviations

4.1 Data Units

PDU      Protocol Data Unit
SNSDU    Subnetwork Service Data Unit
NSDU     Network Service Data Unit
NPDU     Network Protocol Data Unit
SNPDU    Subnetwork Protocol Data Unit

4.2 Protocol Data Units

ESH PDU  ISO 9542 End System Hello Protocol Data Unit
ISH PDU  ISO 9542 Intermediate System Hello Protocol Data Unit
RD PDU   ISO 9542 Redirect Protocol Data Unit
IIH      Intermediate system to Intermediate system Hello Protocol
         Data Unit
LSP      Link State Protocol Data Unit
SNP      Sequence Numbers Protocol Data Unit
CSNP     Complete Sequence Numbers Protocol Data Unit
PSNP     Partial Sequence Numbers Protocol Data Unit

4.3 Addresses

AFI      Authority and Format Indicator
DSP      Domain Specific Part
IDI      Initial Domain Identifier
IDP      Initial Domain Part
NET      Network Entity Title
NSAP     Network Service Access Point
SNPA     Subnetwork Point of Attachment

4.4 Miscellaneous

DA       Dynamically Assigned
DED      Dynamically Established Data link
DTE      Data Terminal Equipment
ES       End System
IS       Intermediate System
L1       Level 1
L2       Level 2
LAN      Local Area Network
MAC      Media Access Control
NLPID    Network Layer Protocol Identifier
PCI      Protocol Control Information
QoS      Quality of Service
SN       Subnetwork
SNAcP    Subnetwork Access Protocol
SNDCP    Subnetwork Dependent Convergence Protocol
SNICP    Subnetwork Independent Convergence Protocol
SRM      Send Routeing Message
SSN      Send Sequence Numbers Message
SVC      Switched Virtual Circuit

5 Typographical Conventions

This International Standard makes use of the following typographical
conventions:

a) Important terms and concepts appear in italic type when introduced
   for the first time;

b) Protocol constants and management parameters appear in sansSerif
   type with multiple words run together.  The first word is lower
   case, with the first character of subsequent words capitalised;

c) Protocol field names appear in SansSerif type with each word
   capitalised.

d) Values of constants, parameters, and protocol fields appear
   enclosed in double quotes.

6 Overview of the Protocol

6.1 System Types

There are the following types of system:

End Systems: These systems deliver NPDUs to other systems and receive
   NPDUs from other systems, but do not relay NPDUs.  This
   International Standard does not specify any additional End system
   functions beyond those supplied by ISO 8473 and ISO 9542.

Level 1 Intermediate Systems: These systems deliver and receive NPDUs
   from other systems, and relay NPDUs from other source systems to
   other destination systems.  They route directly to systems within
   their own area, and route towards a level 2 Intermediate system
   when the destination system is in a different area.

Level 2 Intermediate Systems: These systems act as Level 1
   Intermediate systems in addition to acting as a system in the
   subdomain consisting of level 2 ISs.  Systems in the level 2
   subdomain route towards a destination area, or another routeing
   domain.

6.2 Subnetwork Types

There are two generic types of subnetworks supported:

a) broadcast subnetworks: These are multi-access subnetworks that
   support the capability of addressing a group of attached systems
   with a single NPDU, for instance ISO 8802.3 LANs.

b) general topology subnetworks: These are modelled as a set of
   point-to-point links each of which connects exactly two systems.
There are several generic types of general topology subnetworks: 1)multipoint links: These are links between more than two systems, where one system is a primary system, and the remaining systems are secondary (or slave) systems. The primary is capable of direct communication with any of the secondaries, but the secondaries cannot communicate directly among themselves. 2)permanent point-to-point links: These are links that stay connected at all times (unless broken, or turned off by system management), for instance leased lines or private links. 3)dynamically established data links (DEDs): these are links over connection oriented facilities, for in stance X.25, X.21, ISDN, or PSTN networks. Dynamically established data links can be used in one of two ways: i)static point-to-point (Static): The call is estab lished upon system management action and cleared only on system management action (or failure). ii)dynamically assigned (DA): The call is estab lished upon receipt of traffic, and brought down on timer expiration when idle. The ad dress to which the call is to be established is determined dynamically from information in the arriving NPDU(s). No ISIS routeing PDUs are exchanged between ISs on a DA cir cuit. All subnetwork types are treated by the Subnetwork Inde pendent functions as though they were connectionless subnetworks, using the Subnetwork Dependent Conver gence functions of ISO 8473 where necessary to provide a connectionless subnetwork service. The Subnetwork De pendent functions do, however, operate differently on connectionless and connection-oriented subnetworks. 6.3 Topologies A single organisation may wish to divide its Administrative Domain into a number of separate Routeing Domains. This has certain advantages, as described in ISO/TR 9575. Furthermore, it is desirable for an intra-domain routeing protocol to aid in the operation of an inter-domain routeing protocol, where such a protocol exists for interconnecting multiple administrative domains. In order to facilitate the construction of such multi-domain topologies, provision is made for the entering of static inter-domain routeing information. This information is pro vided by a set of Reachable Address Prefixes entered by System Management at the ISs which have links which cross routeing domain boundaries. The prefix indicates that any NSAPs whose NSAP address matches the prefix may be reachable via the SNPA with which the prefix is associ ated. Where the subnetwork to which this SNPA is con nected is a general topology subnetwork supporting dy namically established data links, the prefix also has associ ated with it the required subnetwork addressing information, or an indication that it may be derived from the destination NSAP address (for example, an X.121 DTE address may sometimes be obtained from the IDI of the NSAP address). The Address Prefixes are handled by the level 2 routeing al gorithm in the same way as information about a level 1 area within the domain. NPDUs with a destination address matching any of the prefixes present on any Level 2 Inter mediate System within the domain can therefore be relayed (using level 2 routeing) by that IS and delivered out of the domain. (It is assumed that the routeing functions of the other domain will then be able to deliver the NPDU to its destination.) 6.4 Addresses Within a routeing domain that conforms to this standard, the Network entity titles of Intermediate systems shall be structured as described in 7.1.1. 
All systems shall be able to generate and forward data PDUs containing NSAP addresses in any of the formats specified by ISO 8348/Add.2. However, NSAP addresses of End systems should be structured as described in 7.1.1 in order to take full advantage of ISIS routeing. Within such a domain it is still possible for some End Systems to have addresses assigned which do not conform to 7.1.1, provided they meet the more general requirements of ISO 8348/Add.2, but they may require additional configura tion and be subject to inferior routeing performance. 6.5 Functional Organisation The intra-domain ISIS routeing functions are divided into two groups -Subnetwork Independent Functions -Subnetwork Dependent Functions 6.5.1 Subnetwork Independent Functions The Subnetwork Independent Functions supply full-duplex NPDU transmission between any pair of neighbour sys tems. They are independent of the specific subnetwork or data link service operating below them, except for recognis ing two generic types of subnetworks: -General Topology Subnetworks, which include HDLC point-to-point, HDLC multipoint, and dynami cally established data links (such as X.25, X.21, and PSTN links), and -Broadcast Subnetworks, which include ISO 8802 LANs. The following Subnetwork Independent Functions are iden tified -Routeing. The routeing function determines NPDU paths. A path is the sequence of connected systems and links between a source ES and a destination ES. The combined knowledge of all the Network Layer entities of all the Intermediate systems within a route ing domain is used to ascertain the existence of a path, and route the NPDU to its destination. The routeing component at an Intermediate system has the follow ing specific functions: 7It extracts and interprets the routeing PCI in an NPDU. 7It performs NPDU forwarding based on the desti nation address. 7It manages the characteristics of the path. If a sys tem or link fails on a path, it finds an alternate route. 7It interfaces with the subnetwork dependent func tions to receive reports concerning an SNPA which has become unavailable, a system that has failed, or the subsequent recovery of an SNPA or system. 7It informs the ISO 8473 error reporting function when the forwarding function cannot relay an NPDU, for instance when the destination is un reachable or when the NPDU would have needed to be segmented and the NPDU requested no seg mentation. -Congestion control. Congestion control manages the resources used at each Intermediate system. 6.5.2 Subnetwork Dependent Functions The subnetwork dependent functions mask the characteris tics of the subnetwork or data link service from the subnetwork independent functions. These include: -Operation of the Intermediate system functions of ISO 9542 on the particular subnetwork, in order to 7Determine neighbour Network entity title(s) and SNPA address(es) 7Determine the SNPA address(s) of operational In termediate systems -Operation of the requisite Subnetwork Dependent Convergence Function as defined in ISO 8473 and its Addendum 3, in order to perform 7Data link initialisation 7Hop by hop fragmentation over subnetworks with small maximum SNSDU sizes 7Call establishment and clearing on dynamically es tablished data links 6.6 Design Goals This International Standard supports the following design requirements. The correspondence with the goals for OSI routeing stated in ISO/TR 9575 are noted. -Network Layer Protocol Compatibility. It is com patible with ISO 8473 and ISO 9542. 
(See clause 7.5 of ISO/TR 9575), -Simple End systems: It requires no changes to end systems, nor any functions beyond those supplied by ISO 8473 and ISO 9542. (See clause 7.2.1 of ISO/TR 9575), -Multiple Organisations: It allows for multiple route ing and administrative domains through the provision of static routeing information at domain boundaries. (See clause 7.3 of ISO/TR 9575), -Deliverability It accepts and delivers NPDUs ad dressed to reachable destinations and rejects NPDUs addressed to destinations known to be unreachable. -Adaptability. It adapts to topological changes within the routeing domain, but not to traffic changes, except potentially as indicated by local queue lengths. It splits traffic load on multiple equivalent paths. (See clause 7.7 of ISO/TR 9575), -Promptness. The period of adaptation to topological changes in the domain is a reasonable function of the domain diameter (that is, the maximum logical dis tance between End Systems within the domain) and Data link speeds. (See clause 7.4 of ISO/TR 9575), -Efficiency. It is both processing and memory effi cient. It does not create excessive routeing traffic overhead. (See clause 7.4 of ISO/TR 9575), -Robustness. It recovers from transient errors such as lost or temporarily incorrect routeing PDUs. It toler ates imprecise parameter settings. (See clause 7.7 of ISO/TR 9575), -Stability. It stabilises in finite time to good routes, provided no continuous topological changes or con tinuous data base corruptions occur. -System Management control. System Management can control many routeing functions via parameter changes, and inspect parameters, counters, and routes. It will not, however, depend on system management action for correct behaviour. -Simplicity. It is sufficiently simple to permit perform ance tuning and failure isolation. -Maintainability. It provides mechanisms to detect, isolate, and repair most common errors that may affect the routeing computation and data bases. (See clause 7.8 of ISO/TR 9575), -Heterogeneity. It operates over a mixture of network and system types, communication technologies, and topologies. It is capable of running over a wide variety of subnetworks, including, but not limited to: ISO 8802 LANs, ISO 8208 and X.25 subnetworks, PSTN networks, and the OSI Data Link Service. (See clause 7.1 of ISO/TR 9575), -Extensibility. It accommodates increased routeing functions, leaving earlier functions as a subset. -Evolution. It allows orderly transition from algorithm to algorithm without shutting down an entire domain. -Deadlock Prevention. The congestion control compo nent prevents buffer deadlock. -Very Large Domains. With hierarchical routeing, and a very large address space, domains of essentially un limited size can be supported. (See clause 7.2 of ISO/TR 9575), -Area Partition Repair. It permits the utilisation of level 2 paths to repair areas which become partitioned due to failing level 1 links or ISs. (See clause 7.7 of ISO/TR 9575), -Determinism. Routes are a function only of the physi cal topology, and not of history. In other words, the same topology will always converge to the same set of routes. -Protection from Mis-delivery. The probability of mis-delivering a NPDU, i.e. delivering it to a Trans port entity in the wrong End System, is extremely low. -Availability. For domain topologies with cut set greater than one, no single point of failure will parti tion the domain. (See clause 7.7 of ISO/TR 9575), -Service Classes. 
The service classes of transit delay, expense22Expense is referred to as cost in ISO 8473. The latter term is not used here because of possible confusion with the more general usage of the term to indicate path cost according to any routeing metric. , and residual error probability of ISO 8473 are supported through the optional inclusion of multi ple routeing metrics. -Authentication. The protocol is capable of carrying information to be used for the authentication of Inter mediate systems in order to increase the security and robustness of a routeing domain. The specific mecha nism supported in this International Standard how ever, only supports a weak form of authentication us ing passwords, and thus is useful only for protection against accidental misconfiguration errors and does not protect against any serious security threat. In the future, the algorithms may be enhanced to provide stronger forms of authentication than can be provided with passwords without needing to change the PDU encoding or the protocol exchange machinery. 6.6.1 Non-Goals The following are not within the design scope of the intra- domain ISIS routeing protocol described in this Interna tional Standard: -Traffic adaptation. It does not automatically modify routes based on global traffic load. -Source-destination routeing. It does not determine routes by source as well as destination. -Guaranteed delivery. It does not guarantee delivery of all offered NPDUs. -Level 2 Subdomain Partition Repair. It will not util ise Level 1 paths to repair a level 2 subdomain parti tion. For full logical connectivity to be available, a connected level 2 subdomain is required. -Equal treatment for all ES Implementations. The End system poll function defined in 8.4.5 presumes that End systems have implemented the Suggested ES Configuration Timer option of ISO 9542. An End sys tem which does not implement this option may experi ence a temporary loss of connectivity following cer tain types of topology changes on its local subnetwork. 6.7 Environmental Requirements For correct operation of the protocol, certain guarantees are required from the local environment and the Data Link Layer. The required local environment guarantees are: a)Resource allocation such that the certain minimum re source guarantees can be met, including 1)memory (for code, data, and buffers) 2)processing; See 12.2.5 for specific performance levels required for conformance b)A quota of buffers sufficient to perform routeing func tions; c)Access to a timer or notification of specific timer expi ration; and d)A very low probability of corrupting data. The required subnetwork guarantees for point-to-point links are: a)Provision that both source and destination systems complete start-up before PDU exchange can occur; b)Detection of remote start-up; c)Provision that no old PDUs be received after start-up is complete; d)Provision that no PDUs transmitted after a particular startup is complete are delivered out of sequence; e)Provision that failure to deliver a specific subnetwork SDU will result in the timely disconnection of the subnetwork connection in both directions and that this failure will be reported to both systems; and f)Reporting of other subnetwork failures and degraded subnetwork conditions. 
The required subnetwork guarantees for broadcast links are: a)Multicast capability, i.e., the ability to address a subset of all connected systems with a single PDU; b)The following events are low probability, which means that they occur sufficiently rarely so as not to impact performance, on the order of once per thou sand PDUs 1)Routeing PDU non-sequentiality, 2)Routeing PDU loss due to detected corruption; and 3)Receiver overrun; c)The following events are very low probability, which means performance will be impacted unless they are extremely rare, on the order of less than one event per four years 1)Delivery of NPDUs with undetected data corrup tion; and 2)Non-transitive connectivity, i.e. where system A can receive transmissions from systems B and C, but system B cannot receive transmissions from system C. The following services are assumed to be not available from broadcast links: a)Reporting of failures and degraded subnetwork condi tions that result in NPDU loss, for instance receiver failure. The routeing functions are designed to account for these failures. 6.8 Functional Organisation of Subnetwork Independent Components The Subnetwork Independent Functions are broken down into more specific functional components. These are de scribed briefly in this sub-clause and in detail in clause 7. This International Standard uses a functional decomposition adapted from the model of routeing presented in clause 5.1 of ISO/TR 9575. The decomposition is not identical to that in ISO/TR 9575, since that model is more general and not specifically oriented toward a detailed description of intra- domain routeing functions such as supplied by this proto col. The functional decomposition is shown below in figure 2. 6.8.1 Routeing The routeing processes are: -Decision Process -Update Process NOTE this comprises both the Information Collection and Information Distribution components identified in ISO/TR 9575. -Forwarding Process -Receive Process 6.8.1.1 Decision Process This process calculates routes to each destination in the do main. It is executed separately for level 1 and level 2 route ing, and separately within each level for each of the route ing metrics supported by the Intermediate system. It uses the Link State Database, which consists of information from the latest Link State PDUs from every other Interme diate system in the area, to compute shortest paths from this IS to all other systems in the area 9in figure 2. The Link State Data Base is maintained by the Update Process. Execution of the Decision Process results in the determina tion of [circuit, neighbour] pairs (known as adjacencies), which are stored in the appropriate Forwarding Information base 10 and used by the Forwarding process as paths along which to forward NPDUs. Several of the parameters in the routeing data base that the Decision Process uses are determined by the implementa tion. These include: -maximum number of Intermediate and End systems within the IS's area; -maximum number of Intermediate and End system neighbours of the IS, etc., so that databases can be sized appropriately. Also parame ters such as -routeing metrics for each circuit; and -timers can be adjusted for enhanced performance. The complete list of System Management set-able parameters is listed in clause 11. 6.8.1.2 Update Process This process constructs, receives and propagates Link State PDUs. Each Link State PDU contains information about the identity and routeing metric values of the adjacencies of the IS that originated the Link State PDU. 
The Update Process receives Link State and Sequence Numbers PDUs from the Receive Process 4in figure 2. It places new routeing information in the routeing infor mation base 6 and propagates routeing information to other Intermediate systems 7and 8 . General characteristics of the Update Process are: -Link State PDUs are generated as a result of topologi cal changes, and also periodically. They may also be generated indirectly as a result of System Manage ment actions (such as changing one of the routeing metrics for a circuit). -Level 1 Link State PDUs are propagated to all Inter mediate systems within an area, but are not propa gated out of an area. -Level 2 Link State PDUs are propagated to all Level 2 Intermediate systems in the domain. -Link State PDUs are not propagated outside of a do main. -The update process, through a set of System Manage ment parameters, enforces an upper bound on the amount of routeing traffic overhead it generates. 6.8.1.3 Forwarding Process This process supplies and manages the buffers necessary to support NPDU relaying to all destinations. It receives, via the Receive Process, ISO 8473 PDUs to be forwarded 5 in figure 2. It performs a lookup in the appropriate33The appropriate Forwarding Database is selected by choosing a routeing metric based on fields in the QoS Maintenance option of ISO 8473. Forwarding Data base 11 to determine the possible output adjacencies to use for forwarding to a given destination, chooses one adjacency 12, generates error indications to ISO 8473 14 , and signals ISO 9542 to issue Redirect PDUs 13 6.8.1.4 Receive Process The Receive Process obtains its inputs from the following sources -received PDUs with the NPID of Intra-Domain route ing 2 in figure 2, -routeing information derived by the ESIS protocol from the receipt of ISO 9542 PDUs 1; and -ISO 8473 data PDUs handed to the routeing function by the ISO 8473 protocol machine 3. It then performs the appropriate actions, which may involve passing the PDU to some other function (e.g. to the For warding Process for forwarding 5). 7 Subnetwork Independent Functions This clause describes the algorithms and associated data bases used by the routeing functions. The managed objects and attributes defined for System Management purposes are described in clause 11. The following processes and data bases are used internally by the subnetwork independent functions. Following each process or data base title, in parentheses, is the type of sys tems which must keep the database. The system types are L2 (level 2 Intermediate system), and L1 (level 1 Inter mediate system). Note that a level 2 Intermediate system is also a level 1 Intermediate system in its home area, so it must keep level 1 databases as well as level 2 databases. Processes: -Decision Process (L2, L1) -Update Process (L2, L1) -Forwarding Process (L2, L1) -Receive Process (L2, L1) Databases: -Level 1 Link State data base (L2, L1) -Level 2 Link State data base (L2) -Adjacency Database (L2, L1) -Circuit Database (L2, L1) -Level 1 Shortest Paths Database (L2, L1) -Level 2 Shortest Paths Database (L2) -Level 1 Forwarding Databases one per routeing metric (L2, L1) -Level 2 Forwarding Database one per routeing metric (L2) 7.1 Addresses The NSAP addresses and NETs of systems are variable length quantities that conform to the requirements of ISO 8348/Add.2. 
The corresponding NPAI contained in ISO 8473 PDUs and in this
protocol's PDUs (such as LSPs and IIHs) must use the preferred binary
encoding; the underlying syntax for this information may be either
abstract binary syntax or abstract decimal syntax.  Any of the AFIs
and their corresponding DSP syntax may be used with this protocol.

7.1.1 NPAI Of Systems Within A Routeing Domain

Figure 3 illustrates the structure of an encoded NSAP address or NET.
The structure of the NPAI will be interpreted in the following way by
the protocol described in this international standard:

Area Address  address of one area within a routeing domain - a
   variable length quantity consisting of the entire high-order part
   of the NPAI, excluding the ID and SEL fields, defined below.

ID  System identifier - a variable length field from 1 to 8 octets
   (inclusive).  Each routeing domain employing this protocol shall
   select a single size for the ID field, and all Intermediate
   systems in the routeing domain shall use this length for the
   system IDs of all systems in the routeing domain.  The set of ID
   lengths supported by an implementation is an implementation
   choice, provided that at least one value in the permitted range
   can be accepted.  The routeing domain administrator must ensure
   that all ISs included in a routeing domain are able to use the ID
   length chosen for that domain.

SEL  NSAP Selector - a 1-octet field which acts as a selector for the
   entity which is to receive the PDU (this may be a Transport entity
   or the Intermediate system Network entity itself).  It is the
   least significant (last) octet of the NPAI.

7.1.2 Deployment of Systems

For correct operation of the routeing protocol defined in this
international standard, systems deployed in a routeing domain must
meet the following requirements:

a) For all systems:

   1) Each system in an area must have a unique systemID: that is, no
      two systems (IS or ES) in an area can use the same ID value.

   2) Each area address must be unique within the global OSIE: that
      is, a given area address can be associated with only one area.

   3) All systems having a given value of area address must be
      located in the same area.

b) Additional Requirements for Intermediate systems:

   1) Each Level 2 Intermediate system within a routeing domain must
      have a unique value for its ID field: that is, no two level 2
      ISs in a routeing domain can have the same value in their ID
      fields.

c) Additional Requirements for End systems:

   1) No two End systems in an area may have addresses that match in
      all but the SEL fields.

d) An End system can be attached to a level 1 IS only if its area
   address matches one of the entries in the adjacent IS's
   manualAreaAddresses parameter.

It is the responsibility of the routeing domain's administrative
authority to enforce the requirements of 7.1.2.  The protocol defined
in this international standard assumes that these requirements are
met, but has no means to verify compliance with them.
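
The requirements of 7.1.2 lend themselves to mechanical checking.
The following Python sketch is an editorial illustration only; the
input structures (a list of systemIDs and a list of encoded End
system NSAP addresses for one area) are assumptions of the example,
not data structures defined by this International Standard.

   def check_area(system_ids, es_nsaps):
       problems = []

       # 7.1.2 a) 1): no two systems in an area may use the same ID.
       if len(system_ids) != len(set(system_ids)):
           problems.append("duplicate systemID within the area")

       # 7.1.2 c) 1): no two End systems in an area may have
       # addresses that match in all but the SEL field (last octet).
       without_sel = [nsap[:-1] for nsap in es_nsaps]
       if len(without_sel) != len(set(without_sel)):
           problems.append("two ESs differ only in the SEL field")

       return problems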

7.1.3 Manual area addresses

The use of several synonymous area addresses by an IS is accommodated
through the use of the management parameter manualAreaAddresses.
This parameter is set locally for each level 1 IS by system
management; it contains a list of all synonymous area addresses
associated with the IS, including the IS's area address as contained
in its own NET.  Each level 1 IS distributes its manualAreaAddresses
in its Level 1 LSP's Area Addresses field, thus allowing level 2 ISs
to create a composite list of all area addresses supported within a
given area.  Level 2 ISs in turn advertise the composite list
throughout the level 2 subdomain by including it in their Level 2
LSP's Area Addresses field, thus distributing information on all the
area addresses associated with the entire routeing domain.

The procedures for establishing an adjacency between two level 1 ISs
require that there be at least one area address in common between
their two manualAreaAddresses lists, and the procedures for
establishing an adjacency between a level 1 IS and an End system
require that the End system's area address must match an entry in the
IS's manualAreaAddresses list.  Therefore, it is the responsibility
of System Management to ensure that each area address associated with
an IS is included: in particular, system management must ensure that
the area addresses of all ESs and Level 1 ISs adjacent to a given
level 1 IS are included in that IS's manualAreaAddresses list.

If the area address field for the destination address of an ISO 8473
PDU (or for the next entry in its source routeing field, when
present) is not listed in the parameter areaAddresses of a level 1 IS
receiving the PDU, then the destination system does not reside in the
IS's area.  Such PDUs will be routed by level 2 routeing.

7.1.4 Encoding of Level 2 Addresses

When a full NSAP address is encoded according to the preferred binary
encoding specified in ISO 8348/Add.2, the IDI is padded with leading
digits (if necessary) to obtain the maximum IDP length specified for
that AFI.

A Level 2 address prefix consists of a leading sub-string of a full
NSAP address, such that it matches a set of full NSAP addresses that
have the same leading sub-string.  However this truncation and
matching is performed on the NSAP represented by the abstract syntax
of the NSAP address, not on the encoded (and hence padded) form.  (An
example of prefix matching may be found in annex B, clause B.1.)

Level 2 address prefixes are encoded in LSPs in the same way as full
NSAP addresses, except when the end of the prefix falls within the
IDP.  In this case the prefix is directly encoded as the string of
semi-octets with no padding.

7.1.5 Comparison of Addresses

Unless otherwise stated, numerical comparison of addresses shall be
performed on the encoded form of the address, by padding the shorter
address with trailing zeros to the length of the longer address, and
then performing a numerical comparison.  The addresses to which this
procedure applies include NSAP addresses, Network Entity Titles, and
SNPA addresses.
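
As an editorial illustration (not part of this International
Standard), the following Python sketch shows how an encoded NET can
be split into the Area Address, ID and SEL fields of 7.1.1, how an
area address can be tested against a manualAreaAddresses list as in
7.1.3, and the padded numerical comparison of 7.1.5.  The id_length
argument and the byte-string representation are assumptions of the
example.

   def split_net(net_octets, id_length):
       """Return (area_address, system_id, sel) for an encoded NET."""
       area = net_octets[:len(net_octets) - id_length - 1]
       system_id = net_octets[len(net_octets) - id_length - 1:-1]
       sel = net_octets[-1:]
       return area, system_id, sel

   def in_own_area(area, manual_area_addresses):
       # A level 1 IS establishes an adjacency with an ES only if the
       # ES's area address matches one of these entries.
       return area in manual_area_addresses

   def compare_addresses(a, b):
       """7.1.5: pad the shorter address with trailing zeros, then
       compare numerically; returns -1, 0 or 1."""
       width = max(len(a), len(b))
       a, b = a.ljust(width, b"\x00"), b.ljust(width, b"\x00")
       return (a > b) - (a < b)
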

7.2 The Decision Process

This process uses the database of Link State information to calculate
the forwarding database(s), from which the forwarding process can
know the proper next hop for each NPDU.  The Level 1 Link State
Database is used for calculating the Level 1 Forwarding Database(s),
and the Level 2 Link State Database is used for calculating the Level
2 Forwarding Database(s).

7.2.1 Input and output

INPUT

-  Link State Database  This database is a set of information from
   the latest Link State PDUs from all known Intermediate systems
   (within this area, for Level 1, or within the level 2 subdomain,
   for Level 2).  This database is received from the Update Process.

-  Notification of an Event  This is a signal from the Update Process
   that a change to a link has occurred somewhere in the domain.

OUTPUT

-  Level 1 Forwarding Databases - one per routeing metric

-  (Level 2 Intermediate systems only) Level 2 Forwarding Databases -
   one per routeing metric

-  (Level 2 Intermediate systems only) The Level 1 Decision Process
   informs the Level 2 Update Process of the ID of the Level 2
   Intermediate system within the area with lowest ID reachable with
   real level 1 links (as opposed to a virtual link consisting of a
   path through the level 2 subdomain)

-  (Level 2 Intermediate systems only) If this Intermediate system is
   the Partition Designated Level 2 Intermediate system in this
   partition, the Level 2 Decision Process informs the Level 1 Update
   Process of the values of the default routeing metric to, and the
   ID of, the partition designated level 2 Intermediate system in
   each other partition of this area.

7.2.2 Routeing metrics

There are four routeing metrics defined, corresponding to the four
possible orthogonal qualities of service defined by the QoS
Maintenance field of ISO 8473.  Each circuit emanating from an
Intermediate system shall be assigned a value for one or more of
these metrics by System management.  The four metrics are as follows:

a) Default metric: This is a metric understood by every Intermediate
   system in the domain.  Each circuit shall have a positive integral
   value assigned for this metric.
The value may be associated with any objective func tion of the circuit, but by convention is intended to measure the capacity of the circuit for handling traffic, for example, its throughput in bits-per-second. Higher values indicate a lower capacity. b)Delay metric: This metric measures the transit delay of the associated circuit. It is an optional metric, which if assigned to a circuit shall have a positive integral value. Higher values indicate a longer transit delay. c)Expense metric: This metric measures the monetary cost of utilising the associated circuit. It is an optional metric, which if assigned to a circuit shall have a posi tive integral value22The path computation algorithm utilised in this International Standard requires that all circuits be assigned a positive value for a metric. Therefore, it is not possible to represent a free circuit by a zero value of the expense metric. By convention, the value 1 is used to indicate a free circuit. . Higher values indicate a larger monetary expense. d)Error metric: This metric measures the residual error probability of the associated circuit. It is an optional metric, which if assigned to a circuit shall have a non- zero value. Higher values indicate a larger probability of undetected errors on the circuit. NOTE - The decision process combines metric values by simple addition. It is important, therefore, that the values of the metrics be chosen accordingly. Every Intermediate system shall be capable of calculating routes based on the default metric. Support of any or all of the other metrics is optional. If an Intermediate system sup ports the calculation of routes based on a metric, its update process may report the metric value in the LSPs for the as sociated circuit; otherwise, the IS shall not report the met ric. When calculating paths for one of the optional routeing metrics, the decision process only utilises LSPs with a value reported for the corresponding metric. If no value is associated with a metric for any of the IS's circuits the sys tem shall not calculate routes based on that metric. NOTE - A consequence of the above is that a system reach able via the default metric may not be reachable by another metric. See 7.4.2 for a description of how the forwarding process selects one of these metrics based on the contents of the ISO 8473 QoS Maintenance option. Each of the four metrics described above may be of two types: an Internal metric or an External metric. Internal metrics are used to describe links/routes to destinations in ternal to the routeing domain. External metrics are used to describe links/routes to destinations outside of the routeing domain. These two types of metrics are not directly compa rable, except the internal routes are always preferred over external routes. In other words an internal route will always be selected even if an external route with lower total cost exists. 7.2.3 Broadcast Subnetworks Instead of treating a broadcast subnetwork as a fully con nected topology, the broadcast subnetwork is treated as a pseudonode, with links to each attached system. Attached systems shall only report their link to the pseudonode. The designated Intermediate system, on behalf of the pseudonode, shall construct Link State PDUs reporting the links to all the systems on the broadcast subnetwork with a zero value for each supported routeing metric33They are set to zero metric values since they have already been assigned metrics by the link to the pseudonode. 
Assigning a non-zero value in the pseudonode LSP would have the effect of doubling the actual value. . The pseudonode shall be identified by the sourceID of the Designated Intermediate system, followed by a non-zero pseudonodeID assigned by the Designated Intermediate system. The pseudonodeID is locally unique to the Desig nated Intermediate system. Designated Intermediate systems are determined separately for level 1 and level 2. They are known as the LAN Level 1 Designated IS and the LAN Level 2 Designated IS respec tively. See 8.4.4. An Intermediate system may resign as Designated Interme diate System on a broadcast circuit either because it (or it's SNPA on the broadcast subnetwork) is being shut down or because some other Intermediate system of higher priority has taken over that function. When an Intermediate system resigns as Designated Intermediate System, it shall initiate a network wide purge of its pseudonode Link State PDU(s) by setting their Remaining Lifetime to zero and performing the actions described in 7.3.16.4. A LAN Level 1 Desig nated Intermediate System purges Level 1 Link State PDUs and a LAN Level 2 Designated Intermediate System purges Level 2 Link State PDUs. An Intermediate system which has resigned as both Level 1 and Level 2 Designated Inter mediate System shall purge both sets of LSPs. When an Intermediate system declares itself as designated Intermediate system and it is in possession of a Link State PDU of the same level issued by the previous Designated Intermediate System for that circuit (if any), it shall initiate a network wide purge of that (or those) Link State PDU(s) as above. 7.2.4 Links Two Intermediate systems are not considered neighbours unless each reports the other as directly reachable over one of their SNPAs. On a Connection-oriented subnetwork (either point-to-point or general topology), the two Interme diate systems in question shall ascertain their neighbour re lationship when a connection is established and hello PDUs exchanged. A malfunctioning IS might, however, report an other IS to be a neighbour when in fact it is not. To detect this class of failure the decision process checks that each link reported as up in a LSP is so reported by both Inter mediate systems. If an Intermediate system considers a link down it shall not mention the link in its Link State PDUs. On broadcast subnetworks, this class of failure shall be de tected by the designated IS, which has the responsibility to ascertain the set of Intermediate systems that can all com municate on the subnetwork. The designated IS shall in clude these Intermediate systems (and no others) in the Link State PDU it generates for the pseudonode represent ing the broadcast subnetwork. 7.2.5 Multiple LSPs for the same system The Update process is capable of dividing a single logical LSP into a number of separate PDUs for the purpose of conserving link bandwidth and processing (see 7.3.4). The Decision Process, on the other hand, shall regard the LSP with LSP Number zero in a special way. If the LSP with LSP Number zero and remaining lifetime > 0, is not present for a particular system then the Decision Process shall not process any LSPs with non-zero LSP Number which may be stored for that system. The following information shall be taken only from the LSP with LSP Number zero. Any values which may be present in other LSPs for that system shall be disregarded by the Decision Process. a)The setting of the LSP Database Overload bit. b)The value of the IS Type field. 
c) The Area Addresses option.

7.2.6 Routeing Algorithm Overview

The routeing algorithm used by the Decision Process is a shortest
path first (SPF) algorithm.  Instances of the algorithm are run
independently and concurrently by all Intermediate systems in a
routeing domain.  Intra-Domain routeing of a PDU occurs on a
hop-by-hop basis: that is, the algorithm determines only the next
hop, not the complete path, that a data PDU will take to reach its
destination.

To guarantee correct and consistent route computation by every
Intermediate system in a routeing domain, this International Standard
depends on the following properties:

a) All Intermediate systems in the routeing domain converge to using
   identical topology information; and

b) Each Intermediate system in the routeing domain generates the same
   set of routes from the same input topology and set of metrics.

The first property is necessary in order to prevent inconsistent,
potentially looping paths.  The second property is necessary to meet
the goal of determinism stated in 6.6.

A system executes the SPF algorithm to find a set of legal paths to a
destination system in the routeing domain.  The set may consist of:

a) a single path of minimum metric sum: these are termed minimum cost
   paths;

b) a set of paths of equal minimum metric sum: these are termed equal
   minimum cost paths; or

c) a set of paths which will get a PDU closer to its destination than
   the local system: these are called downstream paths.

Paths which do not meet the above conditions are illegal and shall
not be used.

The Decision Process, in determining its paths, also ascertains the
identity of the adjacency which lies on the first hop to the
destination on each path.  These adjacencies are used to form the
Forwarding Database, which the forwarding process uses for relaying
PDUs.

Separate route calculations are made for each pairing of a level in
the routeing hierarchy (i.e. L1 and L2) with a supported routeing
metric.  Since there are four routeing metrics and two levels, some
systems may execute multiple instances of the SPF algorithm.  For
example,

-  if an IS is a L2 Intermediate system which supports all four
   metrics and computes minimum cost paths for all metrics, it would
   execute the SPF calculation eight times.

-  if an IS is a L1 Intermediate system which supports all four
   metrics, and additionally computes downstream paths, it would
   execute the algorithm 4 x (number of neighbours + 1) times.

Any implementation of an SPF algorithm meeting both the static and
dynamic conformance requirements of clause 12 of this International
Standard may be used.  Recommended implementations are described in
detail in Annex C.
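
As an editorial illustration only (it is not the recommended
implementation of Annex C, and its data structures are assumptions of
the example), the following Python sketch computes, for one level and
one routeing metric, the metric sum and first-hop adjacency for every
reachable system.

   import heapq

   def spf(links, self_id):
       """links: {system: [(neighbour, metric), ...]}.
       Returns {system: (metric_sum, first_hop_adjacency)}."""
       paths = {self_id: (0, None)}
       queue = [(0, self_id, None)]
       while queue:
           cost, system, first_hop = heapq.heappop(queue)
           if cost > paths[system][0]:
               continue  # stale queue entry
           for neighbour, metric in links.get(system, []):
               new_cost = cost + metric
               # The adjacency used for the first hop out of this IS
               # serves every destination further along the path.
               hop = neighbour if system == self_id else first_hop
               if neighbour not in paths or new_cost < paths[neighbour][0]:
                   paths[neighbour] = (new_cost, hop)
                   heapq.heappush(queue, (new_cost, neighbour, hop))
       return paths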

7.2.7 Removal of Excess Paths

When there are more than maximumPathSplits legal paths to a
destination, this set shall be pruned until only maximumPathSplits
remain.  The Intermediate system shall discriminate based upon:

NOTE - The precise precedence among the paths is specified in order
to meet the goal of determinism defined in 6.6.

-  adjacency type: Paths associated with End system or level 2
   reachable address prefix adjacencies are retained in preference to
   other adjacencies.

-  metric sum: Paths having a lesser metric sum are retained in
   preference to paths having a greater metric sum.  By metric sum is
   understood the sum of the metrics along the path to the
   destination.

-  neighbour ID: where two or more paths are associated with
   adjacencies of the same type, an adjacency with a lower neighbour
   ID is retained in preference to an adjacency with a higher
   neighbour ID.

-  circuit ID: where two or more paths are associated with
   adjacencies of the same type, and same neighbour ID, an adjacency
   with a lower circuit ID is retained in preference to an adjacency
   with a higher circuit ID, where circuit ID is the value of:

   -  ptPtCircuitID for non-broadcast circuits,

   -  l1CircuitID for broadcast circuits when running the Level 1
      Decision Process, and

   -  l2CircuitID for broadcast circuits when running the Level 2
      Decision Process.

-  lANAddress: where two or more adjacencies are of the same type,
   same neighbour ID, and same circuit ID (e.g. a system with
   multiple LAN adapters on the same circuit) an adjacency with a
   lower lANAddress is retained in preference to an adjacency with a
   higher lANAddress.
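
NOTE (editorial illustration, not part of this International
Standard) - the precedence rules above amount to a deterministic sort
followed by truncation.  In the following Python sketch the Path
record and the encoding of adjacency type (0 for End system or
reachable address prefix adjacencies, 1 for others) are assumptions
of the example.

   from dataclasses import dataclass

   @dataclass
   class Path:
       adjacency_type: int   # 0 = ES or reachable address prefix
       metric_sum: int
       neighbour_id: bytes
       circuit_id: int
       lan_address: bytes

   def prune_paths(paths, maximum_path_splits):
       # Sorting by the precedence rules makes the retained subset a
       # deterministic function of the topology, as required by 6.6.
       ordered = sorted(paths, key=lambda p: (p.adjacency_type,
                                              p.metric_sum,
                                              p.neighbour_id,
                                              p.circuit_id,
                                              p.lan_address))
       return ordered[:maximum_path_splits]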

7.2.8 Robustness Checks

7.2.8.1 Computing Routes through Overloaded Intermediate systems

The Decision Process shall not utilise a link to an Intermediate
system neighbour from an IS whose LSPs have the LSP Database Overload
indication set.  Such paths may introduce loops since the overloaded
IS does not have a complete routeing information base.  The Decision
Process shall, however, utilise the link to reach End system
neighbours, since these paths are guaranteed to be non-looping.

7.2.8.2 Two-way connectivity check

The Decision Process shall not utilise a link between two
Intermediate Systems unless both ISs report the link.

NOTE - the check is not applicable to links to an End System.

Reporting the link indicates that it has a defined value for at least
the default routeing metric.  It is permissible for two endpoints to
report different defined values of the same metric for the same link.
In this case, routes may be asymmetric.
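
As an editorial illustration of the two-way connectivity check (not
part of this International Standard), the following Python sketch
accepts a link only when each Intermediate system's LSPs list the
other as a neighbour; reported_neighbours is a hypothetical mapping
from system ID to the set of IS neighbours reported in that system's
LSPs.

   def link_usable(a, b, reported_neighbours):
       return (b in reported_neighbours.get(a, set()) and
               a in reported_neighbours.get(b, set()))
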
lanL2DesignatedIntermediateSystemChanges ATTRIBUTE DERIVED FROM nonWrappingCounter; BEHAVIOUR lanL2DesignatedIntermediateSystemChanges-B BEHAVIOUR DEFINED AS Number of LAN L2 Designated Inter mediate System Change events generated;; REGISTERED AS {ISO10589-ISIS.aoi lanL2DesignatedIntermediateSystemChanges (76)}; adjacencyName ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.GraphicString; MATCHES FOR Equality, Substrings; BEHAVIOUR adjacencyName-B BEHAVIOUR DEFINED AS A string which is the Identifier for the Adjacency and which is unique amongst the set of Adjacencies maintained for this Circuit. If this is a manually created adjacency (i.e. the type is Manual) it is set by the System Manager when the Adjacency is created, otherwise it is generated by the imple mentation such that it is unique. The set of identifier containing the leading string "Auto" are reserved for Automatic Adjacencies. An attempt to create a Man ual Adjacency with such an identifier will cause an exception to be raised;; REGISTERED AS {ISO10589-ISIS.aoi adjacencyName (77)}; adjacencyState ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.AdjacencyState; MATCHES FOR Equality; BEHAVIOUR adjacencyState-B BEHAVIOUR DEFINED AS The state of the adjacency;; REGISTERED AS {ISO10589-ISIS.aoi adjacencyState (78)}; neighbourLANAddress ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.LANAddress; MATCHES FOR Equality; BEHAVIOUR neighbourLANAddress-B BEHAVIOUR DEFINED AS The MAC address of the neighbour sys tem on a broadcast circuit;, replaceOnlyWhileDisabled-B; PARAMETERS constraintViolation; REGISTERED AS {ISO10589-ISIS.aoi neighbourLANAddress (79)}; neighbourSystemType ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.NeighbourSystemType; MATCHES FOR Equality; BEHAVIOUR neighbourSystemType-B BEHAVIOUR DEFINED AS The type of the neighbour system one of: Unknown End system Intermediate system L1 Intermediate system L2 Intermediate system;; REGISTERED AS {ISO10589-ISIS.aoi neighbourSystemType (80)}; sNPAAddress ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.SNPAAddress; MATCHES FOR Equality; BEHAVIOUR sNPAAddress-B BEHAVIOUR DEFINED AS The SNPA Address of the neighbour system on an X.25 circuit;, replaceOnlyWhileDisabled-B; PARAMETERS constraintViolation; REGISTERED AS {ISO10589-ISIS.aoi sNPAAddress (81)}; adjacencyUsageType ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.AdjacencyUsageType; MATCHES FOR Equality; BEHAVIOUR level-B BEHAVIOUR DEFINED AS The usage of the Adjacency. An Adjacency of type Level 1" will be used for Level 1 traffic only. An adjacency of type Level 2" will be used for Level 2 traffic only. An adjacency of type Level 1 and 2" will be used for both Level 1 and Level 2 traffic. There may be two adjacencies (of types Level 1" and Level 2" between the same pair of Intermediate Systems.;; REGISTERED AS {ISO10589-ISIS.aoi adjacencyUsageType (82)}; neighbourSystemID ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.SystemID; MATCHES FOR Equality; BEHAVIOUR neighbourSystemID-B BEHAVIOUR DEFINED AS The SystemID of the neighbouring In termediate system from the Source ID field of the neighbour's IIH PDU. 
The Intermediate System ID for this neighbour is derived by appending zero to this value.;; REGISTERED AS {ISO10589-ISIS.aoi neighbourSystemID (83)}; neighbourAreas ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.AreaAddresses; MATCHES FOR Equality, Set Comparison, Set Intersection; BEHAVIOUR neighbourAreas-B BEHAVIOUR DEFINED AS This contains the Area Addresses of a neighbour Intermediate System from the IIH PDU.;; REGISTERED AS {ISO10589-ISIS.aoi neighbourAreas (84)}; holdingTimer ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.HoldingTimer; MATCHES FOR Equality, Ordering; BEHAVIOUR holdingTimer-B BEHAVIOUR DEFINED AS Holding time for this adjacency updated from the IIH PDUs;; REGISTERED AS {ISO10589-ISIS.aoi holdingTimer (85)}; lANPriority ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.IntermediateSystemPriority; MATCHES FOR Equality, Ordering; BEHAVIOUR lANPriority-B BEHAVIOUR DEFINED AS Priority of neighbour on this adjacency for becoming LAN Level 1 Designated Intermediate System if adjacencyType is L1 Intermediate System or LAN Level 2 Designated Intermediate System if adjacencyType is L2 Intermediate System;; REGISTERED AS {ISO10589-ISIS.aoi lANPriority (86)}; endSystemIDs ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.EndSystemIDs; MATCHES FOR Equality, Set Comparison, Set Intersection; BEHAVIOUR endSystemIDs-B BEHAVIOUR DEFINED AS This contains the system ID(s) of a neighbour End system. Where (in a Intermediate System) an adjacency has been created manually, these will be the set of IDs given in the manualIDs parameter of the create directive.;; REGISTERED AS {ISO10589-ISIS.aoi endSystemIDs (87)}; networkEntityTitle ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.NetworkEntityTitle; MATCHES FOR Equality, Ordering; BEHAVIOUR networkEntityTitle-B BEHAVIOUR DEFINED AS The Network entity Title which is the destination of a Virtual link being used to repair a partitioned Level 1 area (see clause 7.2.10);; REGISTERED AS {ISO10589-ISIS.aoi networkEntityTitle (88)}; metric ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.PathMetric; MATCHES FOR Equality, Ordering; BEHAVIOUR metric-B BEHAVIOUR DEFINED AS Cost of least cost L2 path(s) to destina tion area based on the default metric;; REGISTERED AS {ISO10589-ISIS.aoi metric (89)}; defaultMetricPathCost ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.PathMetric; MATCHES FOR Equality, Ordering; BEHAVIOUR defaultMetricPathCost-B BEHAVIOUR DEFINED AS Cost of least cost path(s) using the de fault metric to destination;; REGISTERED AS {ISO10589-ISIS.aoi defaultMetricPathCost (90)}; defaultMetricOutputAdjacencies ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.OutputAdjacencies; MATCHES FOR Equality, Set Comparison, Set Intersection; BEHAVIOUR defaultMetricOutputAdjacencies-B BEHAVIOUR DEFINED AS The set of Adjacency (or Reachable Ad dress) managed object identifiers representing the forwarding decisions based upon the default metric for the destination;; REGISTERED AS {ISO10589-ISIS.aoi defaultMetricOutputAdjacencies (91)}; delayMetricPathCost ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.PathMetric; MATCHES FOR Equality, Ordering; BEHAVIOUR delayMetricPathCost-B BEHAVIOUR DEFINED AS Cost of least cost path(s) using the delay metric to destination;; REGISTERED AS {ISO10589-ISIS.aoi delayMetricPathCost (92)}; delayMetricOutputAdjacencies ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.OutputAdjacencies; MATCHES FOR Equality, Set Comparison, Set Intersection; BEHAVIOUR delayMetricOutputAdjacencies-B BEHAVIOUR DEFINED AS The set of Adjacency (or Reachable Ad 
dress) managed object identifiers representing the forwarding decisions based upon the delay metric for the destination;; REGISTERED AS {ISO10589-ISIS.aoi delayMetricOutputAdjacencies (93)}; expenseMetricPathCost ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.PathMetric; MATCHES FOR Equality, Ordering; BEHAVIOUR expenseMetricPathCost-B BEHAVIOUR DEFINED AS Cost of least cost path(s) using the ex pense metric to destination;; REGISTERED AS {ISO10589-ISIS.aoi expenseMetricPathCost (94)}; expenseMetricOutputAdjacencies ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.OutputAdjacencies; MATCHES FOR Equality, Set Comparison, Set Intersection; BEHAVIOUR expenseMetricOutputAdjacencies-B BEHAVIOUR DEFINED AS The set of Adjacency (or Reachable Ad dress) managed object identifiers representing the forwarding decisions based upon the expense metric for the destination;; REGISTERED AS {ISO10589-ISIS.aoi expenseMetricOutputAdjacencies (95)}; errorMetricPathCost ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.PathMetric; MATCHES FOR Equality, Ordering; BEHAVIOUR errorMetricPathCost-B BEHAVIOUR DEFINED AS Cost of least cost path(s) using the error metric to destination;; REGISTERED AS {ISO10589-ISIS.aoi errorMetricPathCost (96)}; errorMetricOutputAdjacencies ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.OutputAdjacencies; MATCHES FOR Equality, Set Comparison, Set Intersection; BEHAVIOUR errorMetricOutputAdjacencies-B BEHAVIOUR DEFINED AS The set of Adjacency (or Reachable Ad dress) managed object identifiers representing the forwarding decisions based upon the error metric for the destination;; REGISTERED AS {ISO10589-ISIS.aoi errorMetricOutputAdjacencies (97)}; addressPrefix ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.AddressPrefix; MATCHES FOR Equality, Substrings; BEHAVIOUR addressPrefix-B BEHAVIOUR DEFINED AS An Area Address (or prefix) of a desti nation area;; REGISTERED AS {ISO10589-ISIS.aoi addressPrefix (98)}; defaultMetric ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.HopMetric; MATCHES FOR Equality, Ordering; BEHAVIOUR defaultMetric-B BEHAVIOUR DEFINED AS The default metric value for reaching the specified prefix over this Circuit. If this attribute is changed while both the Reachable Address and the Circuit are Enabled (i.e. state On), the actions described in clause 8.3.5.4 must be taken. The value of zero is reserved to indicate that this metric is not supported;; REGISTERED AS {ISO10589-ISIS.aoi defaultMetric (99)}; delayMetric ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.HopMetric; MATCHES FOR Equality, Ordering; BEHAVIOUR delayMetric-B BEHAVIOUR DEFINED AS The delay metric value for reaching the specified prefix over this Circuit.BEHAVIOURIf this attribute is changed while both the Reachable Address and the Circuit are Enabled (i.e. state On), the actions described in clause 8.3.5.4 must be taken. The value of zero is reserved to indicate that this metric is not supported;; REGISTERED AS {ISO10589-ISIS.aoi delayMetric (100)}; expenseMetric ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.HopMetric; MATCHES FOR Equality, Ordering; BEHAVIOUR expenseMetric-B BEHAVIOUR DEFINED AS The expense metric value for reaching the specified prefix over this Circuit. If this attribute is changed while both the Reachable Address and the Circuit are Enabled (i.e. state On), the actions described in clause 8.3.5.4 must be taken. 
The value of zero is reserved to indicate that this metric is not supported;; REGISTERED AS {ISO10589-ISIS.aoi expenseMetric (101)}; errorMetric ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.HopMetric; MATCHES FOR Equality, Ordering; BEHAVIOUR errorMetric-B BEHAVIOUR DEFINED AS The error metric value for reaching the specified prefix over this Circuit. If this attribute is changed while both the Reachable Address and the Circuit are Enabled (i.e. state On), the actions de scribed in clause 8.3.5.4 must be taken. The value of zero is reserved to indicate that this metric is not supported;; REGISTERED AS {ISO10589-ISIS.aoi errorMetric (102)}; defaultMetricType ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.MetricType; MATCHES FOR Equality; BEHAVIOUR defaultMetricType-B BEHAVIOUR DEFINED AS Indicates whether the default metric is internal or external;; REGISTERED AS {ISO10589-ISIS.aoi defaultMetricType (103)}; delayMetricType ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.MetricType; MATCHES FOR Equality; BEHAVIOUR delayMetricType-B BEHAVIOUR DEFINED AS Indicates whether the delay metric is in ternal or external;; REGISTERED AS {ISO10589-ISIS.aoi delayMetricType (104)}; expenseMetricType ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.MetricType; MATCHES FOR Equality; BEHAVIOUR expenseMetricType-B BEHAVIOUR DEFINED AS Indicates whether the expense metric is internal or external;; REGISTERED AS {ISO10589-ISIS.aoi expenseMetricType (105)}; errorMetricType ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.MetricType; MATCHES FOR Equality; BEHAVIOUR errorMetricType-B BEHAVIOUR DEFINED AS Indicates whether the error metric is in ternal or extternal;; REGISTERED AS {ISO10589-ISIS.aoi errorMetricType (106)}; mappingType ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.MappingType; MATCHES FOR Equality; BEHAVIOUR mappingType-B BEHAVIOUR DEFINED AS The type of mapping to be employed to ascertain the SNPA Address to which a call should be placed for this prefix. X.121 indicates that the X.121 address extraction algorithm is to be em ployed. This will extract the SNPA address from the IDI of an X.121 format IDP of the NSAP address to which the NPDU is to be forwarded. Manual indi cates that the set of addresses in the sNPAAddresses or LANAddresses characteristic are to be used. For Broadcast circuits, only the value Manual is permit ted;; REGISTERED AS {ISO10589-ISIS.aoi mappingType (107)}; lANAddress ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.LANAddress; MATCHES FOR Equality; BEHAVIOUR lANAddress-B BEHAVIOUR DEFINED AS Asingle LAN addresses to which an NPDU may be directed in order to reach an address which matches the address prefix of the Reachable Address. An exception is raised if an attempt is made to enable the Reachable Address with the de fault value;; REGISTERED AS {ISO10589-ISIS.aoi lANAddress (108)}; sNPAAddresses ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.SNPAAddresses; MATCHES FOR Equality; BEHAVIOUR sNPAAddresses-B BEHAVIOUR DEFINED AS A set of SNPA addresses to which a call may be directed in order to reach an address which matches the address prefix of the Reachable Ad dress. 
Associated with each SNPA Address, but not visible to System Management, is a variable lastFail ure of Type BinaryAbsoluteTime;; REGISTERED AS {ISO10589-ISIS.aoi sNPAAddresses (109)}; nonWrappingCounter ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.NonWrappingCounter; MATCHES FOR Equality, Ordering; BEHAVIOUR nonWrappingCounter-B BEHAVIOUR DEFINED AS Non-replaceable, non-wrapping counter;; -- This attibute is only defined in order to allow other counter attributes to be derived from it. REGISTERED AS {ISO10589-ISIS.aoi nonWrappingCounter (110)}; areaTransmitPassword ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.Password; MATCHES FOR Equality; BEHAVIOUR areaTransmitPassword-B BEHAVIOUR DEFINED AS The value to be used as a transmit pass word in Level 1 LSP, and SNP PDUs transmitted by this Intermediate System;; REGISTERED AS {ISO10589-ISIS.aoi areaTransmitPassword (111)}; areaReceivePasswords ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.Passwords; MATCHES FOR Equality; BEHAVIOUR areaReceivePasswords-B BEHAVIOUR DEFINED AS The values to be used as receive pass words to check the receipt of Level 1 LSP, and SNP PDUs;; REGISTERED AS {ISO10589-ISIS.aoi areaReceivePasswords (112)}; domainTransmitPassword ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.Password; MATCHES FOR Equality; BEHAVIOUR domainTransmitPassword-B BEHAVIOUR DEFINED AS The value to be used as a transmit pass word in Level 2 LSP, and SNP PDUs transmitted by this Intermediate System;; REGISTERED AS {ISO10589-ISIS.aoi domainTransmitPassword (113)}; domainReceivePasswords ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.Passwords; MATCHES FOR Equality; BEHAVIOUR domainReceivePasswords-B BEHAVIOUR DEFINED AS The values to be used as receive pass words to check the receipt of Level 2 LSP, and SNP PDUs;; REGISTERED AS {ISO10589-ISIS.aoi domainReceivePasswords (114)}; circuitTransmitPassword ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.Password; MATCHES FOR Equality; BEHAVIOUR circuitTransmitPassword-B BEHAVIOUR DEFINED AS The value to be used as a transmit pass word in IIH PDUs transmitted by this Intermediate System;; REGISTERED AS {ISO10589-ISIS.aoi circuitTransmitPassword (115)}; circuitReceivePasswords ATTRIBUTE WITH ATTRIBUTE SYNTAX ISO10589-ISIS.Passwords; MATCHES FOR Equality; BEHAVIOUR circuitReceivePasswords-B BEHAVIOUR DEFINED AS The values to be used as receive pass words to check the receipt of IIH PDUs;; REGISTERED AS {ISO10589-ISIS.aoi circuitReceivePasswords (116)}; authenticationFailures ATTRIBUTE DERIVED FROM nonWrappingCounter; BEHAVIOUR authenticationFailures-B BEHAVIOUR DEFINED AS Count of authentication Failure notifica tions generated;; REGISTERED AS {ISO10589-ISIS.aoi authenticationFailures (117)}; 11.2.12 Notification Definitions -- Note pduFormatError notification now included in Network layer definitions corruptedLSPDetected NOTIFICATION BEHAVIOUR corruptedLSPDetected-B BEHAVIOUR DEFINED AS The Corrupted LSP Detected Notifica tion is generated when a corrupted Link State PDU is detected in memory. The occurance of this event is counted by the corruptedLSPsDetected counter.;; MODE NON-CONFIRMED; WITH INFORMATION SYNTAX ISO10589-ISIS.NotificationInfo; REGISTERED AS {ISO10589-ISIS.noi corruptedLSPDetected (1)}; lSPL1DatabaseOverload NOTIFICATION BEHAVIOUR lSPL1DatabaseOverload-B BEHAVIOUR DEFINED AS The LSP L1 Database Overload Notifi cation is generated when the l1State of the system changes between On and Waiting or Waiting and On. 
The stateChange argument is set to indicate the resulting state, and in the case of Waiting the sour ceID is set to indicate the source of the LSP which precipitated the overload. The occurance of this event is counted by the lSPL1DatabaseOverloads counter.;; MODE NON-CONFIRMED; PARAMETERS notificationOverloadStateChange, notificationSourceID; WITH INFORMATION SYNTAX ISO10589-ISIS.NotificationInfo; REGISTERED AS {ISO10589-ISIS.noi lSPL1DatabaseOverload (2)}; manualAddressDroppedFromArea NOTIFICATION BEHAVIOUR manualAddressDroppedFromArea-B BEHAVIOUR DEFINED AS The Manual Address Dropped From Area Notification is generated when one of the man ualAreaAddresses (specified on this system) is ig nored when computing partitionAreaAddresses or areaAddresses because there are more than Maximu mAreaAddresses distinct Area Addresses. The areaAddress argument is set to the ignored Area Ad dress. It is generated once for each Area Address in manualAreaAddresses which is dropped. It is not logged again for that Area Address until after it has been reinstated into areaAddresses (i.e. it is only the action of dropping the Area Address and not the state of being dropped, which causes the event to be generated). The occurance of this event is counted by the manualAddressDroppedFromAreas counter.;; MODE NON-CONFIRMED; PARAMETERS notificationAreaAddress; WITH INFORMATION SYNTAX ISO10589-ISIS.NotificationInfo; REGISTERED AS {ISO10589-ISIS.noi manualAddressDroppedFromArea (3)}; attemptToExceedMaximumSequenceNumber NOTIFICATION BEHAVIOUR attemptToExceedMaximumSequenceNumber-B BEHAVIOUR DEFINED AS The Attempt To Exceed Maximum Se quence Number Notification is generated when an attempt is made to increment the sequence number of an LSP beyond the maximum sequence number. Following the generation of this event the operation of the Routeing state machine shall be disabled for at least (MaxAge + ZeroAgeLifetime) seconds. The occurance of this event is counted by the attemptsToExceedMaximumSequenceNumber counter.;; MODE NON-CONFIRMED; WITH INFORMATION SYNTAX ISO10589-ISIS.NotificationInfo; REGISTERED AS {ISO10589-ISIS.noi attemptToExceedMaximumSequenceNumber (4)}; sequenceNumberSkip NOTIFICATION BEHAVIOUR sequenceNumberSkip-B BEHAVIOUR DEFINED AS The Sequence Number Skipped Notifi cation is generated when the sequence number of an LSP is incremented by more than one. The occur ance of this event is counted by the sequenceNum berSkips counter.;; MODE NON-CONFIRMED; WITH INFORMATION SYNTAX ISO10589-ISIS.NotificationInfo; REGISTERED AS {ISO10589-ISIS.noi sequenceNumberSkip (5)}; ownLSPPurge NOTIFICATION BEHAVIOUR ownLSPPurge-B BEHAVIOUR DEFINED AS The Own LSP Purged Notification is generated when a zero aged copy of a system's own LSP is received from some other system. This repre sents an erroneous attempt to purge the local sys tem's LSP. The occurance of this event is counted by the ownLSPPurges counter.;; MODE NON-CONFIRMED; WITH INFORMATION SYNTAX ISO10589-ISIS.NotificationInfo; REGISTERED AS {ISO10589-ISIS.noi ownLSPPurge (6)}; partitionVirtualLinkChange NOTIFICATION BEHAVIOUR partitionVirtualLinkChange-B BEHAVIOUR DEFINED AS The Partition Virtual Link Change Noti fication is generated when a virtual link (for the pur poses of Level 1 partition repair) is either created or deleted. The relative order of events relating to the same Virtual Link must be preserved. 
The occur ance of this event is counted by the partitionVirtual LinkChanges counter.;; MODE NON-CONFIRMED; PARAMETERS notificationVirtualLinkChange, notificationVirtualLinkAddress; WITH INFORMATION SYNTAX ISO10589-ISIS.NotificationInfo; REGISTERED AS {ISO10589-ISIS.noi partitionVirtualLinkChange (7)}; lSPL2DatabaseOverload NOTIFICATION BEHAVIOUR lSPL2DatabaseOverload-B BEHAVIOUR DEFINED AS The LSP L2 Database Overload Notifi cation is generated when the l2State of the system changes between On and Waiting or Waiting and On. The stateChange argument is set to indicate the resulting state, and in the case of Waiting the sour ceID is set to indicate the source of the LSP which precipitated the overload. The occurance of this event is counted by the lSPL2DatabaseOverloads counter.;; MODE NON-CONFIRMED; PARAMETERS notificationOverloadStateChange, notificationSourceID; WITH INFORMATION SYNTAX ISO10589-ISIS.NotificationInfo; REGISTERED AS {ISO10589-ISIS.noi lSPL2DatabaseOverload (8)}; iDFieldLengthMismatch NOTIFICATION BEHAVIOUR iDFieldLengthMismatch-B BEHAVIOUR DEFINED AS The iDFieldLengthMismatch Notifica tion is generated when a PDU is received with a dif ferent value for ID field length to that of the receiving Intermediate system. The occurance of this event is counted by the iDFieldLengthMismatches counter.;; MODE NON-CONFIRMED; PARAMETERS notificationIDLength, notificationSourceID; WITH INFORMATION SYNTAX ISO10589-ISIS.NotificationInfo; REGISTERED AS {ISO10589-ISIS.noi iDFieldLengthMismatch (9)}; circuitChange NOTIFICATION BEHAVIOUR circuitChange-B BEHAVIOUR DEFINED AS The Circuit Change Notification is gen erated when the state of the Circuit changes from On to Off or from Off to On. The relative order of events relating to the same Circuit must be pre served. The occurance of this event is counted by the circuitChanges counter.;; MODE NON-CONFIRMED; PARAMETERS notificationNewCircuitState; WITH INFORMATION SYNTAX ISO10589-ISIS.NotificationInfo; REGISTERED AS {ISO10589-ISIS.noi circuitChange (10)}; adjacencyStateChange NOTIFICATION BEHAVIOUR adjacencyStateChange-B BEHAVIOUR DEFINED AS The Adjacency State Change Notifica tion is generated when the state of an Adjacency on the Circuit changes from Up to Down or Down to Up (in the latter case the Reason argument is omit ted). For these purposes the states Up and Up/dormant are considered to be Up, and any other state is considered to be Down. The relative order of events relating to the same Adjacency must be pre served. The occurance of this event is counted by the adjacencyStateChanges counter.;; MODE NON-CONFIRMED; PARAMETERS notificationAdjacentSystem, notificationNewAdjacencyState, notificationReason, notificationPDUHeader, notificationCalledAddress, notificationVersion; WITH INFORMATION SYNTAX ISO10589-ISIS.NotificationInfo; REGISTERED AS {ISO10589-ISIS.noi adjacencyStateChange (11)}; initializationFailure NOTIFICATION BEHAVIOUR initializationFailure-B BEHAVIOUR DEFINED AS The Initialisation Failure Notification is generated when an attempt to initialise with an adja cent system fails as a result of either Version Skew or Area Mismatch. In the case of Version Skew, the Adjacent system argument is not present. 
The oc curance of this event is counted by the initialization Failures counter.;; MODE NON-CONFIRMED; PARAMETERS notificationAdjacentSystem, notificationReason, notificationPDUHeader, notificationCalledAddress, notificationVersion; WITH INFORMATION SYNTAX ISO10589-ISIS.NotificationInfo; REGISTERED AS {ISO10589-ISIS.noi initializationFailure (12)}; rejectedAdjacency NOTIFICATION BEHAVIOUR rejectedAdjacency-B BEHAVIOUR DEFINED AS The Rejected Adjacency Notification is generated when an attempt to create a new adja cency is rejected, because of a lack of resources. The occurance of this event is counted by the reject edAdjacencies counter.;; MODE NON-CONFIRMED; PARAMETERS notificationAdjacentSystem, notificationReason, notificationPDUHeader, notificationCalledAddress, notificationVersion; WITH INFORMATION SYNTAX ISO10589-ISIS.NotificationInfo; REGISTERED AS {ISO10589-ISIS.noi rejectedAdjacency (13)}; lanL1DesignatedIntermediateSystemChange NOTIFICATION BEHAVIOUR lanL1DesignatedIntermediateSystemChange-B BEHAVIOUR DEFINED AS The LAN L1 Designated Intermediate System Change Notification is generated when the local system either elects itself or resigns as being the LAN L1 Designated Intermediate System on this circuit. The relative order of these events must be preserved. The occurance of this event is counted by the lanL1DesignatedIntermediateSystemChanges counter.;; MODE NON-CONFIRMED; PARAMETERS notificationDesignatedIntermediateSystemChange; WITH INFORMATION SYNTAX ISO10589-ISIS.NotificationInfo; REGISTERED AS {ISO10589-ISIS.noi lanL1DesignatedIntermediateSystemChange (14)}; exceededMaximumSVCAdjacencies NOTIFICATION BEHAVIOUR exceededMaximumSVCAdjacencies-B BEHAVIOUR DEFINED AS The Exceeded Maximum SVC Adjacen cies Notification is generated when there is no free adjacency on which to establish an SVC for a new destination.(see clause 8.3.2.3) The occurance of this event is counted by the timesExceededMaximumSVCAdjacencies counter.;; MODE NON-CONFIRMED; WITH INFORMATION SYNTAX ISO10589-ISIS.NotificationInfo; REGISTERED AS {ISO10589-ISIS.noi exceededMaximumSVCAdjacencies (15)}; exceededMaximumCallAttempts NOTIFICATION BEHAVIOUR exceededMaximumCallAttempts-B BEHAVIOUR DEFINED AS The Exceeded Maximum Call Attempts Notification is generated when recallCount becomes equal to maximumCallAttempts. The occurance of this event is counted by the timesExceededMaxi mumCallAttempts counter.;; MODE NON-CONFIRMED; WITH INFORMATION SYNTAX ISO10589-ISIS.NotificationInfo; REGISTERED AS {ISO10589-ISIS.noi exceededMaximumCallAttempts (16)}; lanL2DesignatedIntermediateSystemChange NOTIFICATION BEHAVIOUR lanL2DesignatedIntermediateSystemChange-B BEHAVIOUR DEFINED AS The LAN L2 Designated Intermediate System Change Notification is generated when the local system either elects itself or resigns as being the LAN L2 Designated Intermediate System on this circuit. The relative order of these events must be preserved. 
The occurance of this event is counted by the lanL2DesignatedIntermediateSystemChanges counter.;; MODE NON-CONFIRMED; PARAMETERS notificationDesignatedIntermediateSystemChange; WITH INFORMATION SYNTAX ISO10589-ISIS.NotificationInfo; REGISTERED AS {ISO10589-ISIS.noi lanL2DesignatedIntermediateSystemChange (17)}; authenticationFailure NOTIFICATION BEHAVIOUR authenticationFailure-B BEHAVIOUR DEFINED AS Generated when a PDU is received with an incorrect Authentication information field;; MODE NON-CONFIRMED; PARAMETERS notificationAdjacentSystem; WITH INFORMATION SYNTAX ISO10589-ISIS.NotificationInfo; REGISTERED AS {ISO10589-ISIS.noi authenticationFailure (18)}; 11.2.13 Action Definitions -- Note: The following actions have been proposed (in SC21 N4977) for inclusion in DMI. Until such time as this is completed, the definitions of these actions are given here. -- activate ACTION BEHAVIOUR activate-B BEHAVIOUR DEFINED AS Sets OperationalState to `enabled' and commences operation;; MODE CONFIRMED; PARAMETERS successResponse, failureResponse, failureReason; WITH INFORMATION SYNTAX ISO10589-ISIS.ActionInfo; WITH REPLY SYNTAX ISO10589-ISIS.ActionReply; REGISTERED AS {ISO10589-ISIS.acoi activate (1)}; deactivate ACTION BEHAVIOUR deactivate-B BEHAVIOUR DEFINED AS Sets OperationalState to `disabled' and ceases operation;; MODE CONFIRMED; PARAMETERS successResponse, failureResponse, failureReason; WITH INFORMATION SYNTAX ISO10589-ISIS.ActionInfo; WITH REPLY SYNTAX ISO10589-ISIS.ActionReply; REGISTERED AS {ISO10589-ISIS.acoi deactivate (2)}; 11.2.14 Parameter Definitions iSO10589-NB-p1 PARAMETER CONTEXT CREATE-INFO; WITH SYNTAX ISO10589-ISIS.ISType; BEHAVIOUR iSO10589-NB-p1-B BEHAVIOUR DEFINED AS The value to be given to the iStype at tribute on MO creation. This parameter is manda tory;; REGISTERED AS {ISO10589-ISIS.proi iSO10589-NB-p1 (1)}; iSO10589Circuit-MO-p1 PARAMETER CONTEXT CREATE-INFO; WITH SYNTAX ISO10589-ISIS.CircuitType; BEHAVIOUR iSO10589Circuit-MO-p1-B BEHAVIOUR DEFINED AS The value to be given to the type attrib ute on MO creation. This parameter is mandatory;; REGISTERED AS {ISO10589-ISIS.proi iSO10589Circuit-MO-p1 (2)}; reachableAddressP1 PARAMETER CONTEXT CREATE-INFO; WITH SYNTAX ISO10589-ISIS.AddressPrefix; BEHAVIOUR reachableAddressp1-B BEHAVIOUR DEFINED AS The value to be given to the addressPre fix attribute on MO creation. This parameter is man datory;; REGISTERED AS {ISO10589-ISIS.proi reachableAddressP1 (3)}; reachableAddressP2 PARAMETER CONTEXT CREATE-INFO; WITH SYNTAX ISO10589-ISIS.MappingType; BEHAVIOUR reachableAddressp2-B BEHAVIOUR DEFINED AS The value to be given to the map pingType attribute on MO creation. This parameter is only permitted when the `type' of the parent cir cuit is either `broadcast' or `DA'. 
In those cases the default value is `manual';; REGISTERED AS {ISO10589-ISIS.proi reachableAddressP2 (4)}; manualAdjacencyP1 PARAMETER CONTEXT CREATE-INFO; WITH SYNTAX ISO10589-ISIS.LANAddress; BEHAVIOUR manualAdjacencyP1-B BEHAVIOUR DEFINED AS The value to be given to the lANAd dress attribute on MO creation;; REGISTERED AS {ISO10589-ISIS.proi manualAdjacencyP1 (5)}; manualAdjacencyP2 PARAMETER CONTEXT CREATE-INFO; WITH SYNTAX ISO10589-ISIS.EndSystemIDs; BEHAVIOUR manualAdjacencyP2-B BEHAVIOUR DEFINED AS The value to be given to the endSys temIDs attribute on MO creation;; REGISTERED AS {ISO10589-ISIS.proi manualAdjacencyP2 (6)}; successResponse PARAMETER CONTEXT ACTION-REPLY; WITH SYNTAX ISO10589-ISIS.ResponseCode; BEHAVIOUR successResponse-B BEHAVIOUR DEFINED AS Returned in the responseCode field of an ActionReply when the action has completed suc cessfully.;; REGISTERED AS {ISO10589-ISIS.proi successResponse (7)}; failureResponse PARAMETER CONTEXT ACTION-REPLY; WITH SYNTAX ISO10589-ISIS.ResponseCode; BEHAVIOUR failureResponse-B BEHAVIOUR DEFINED AS Returned in the responseCode field of an ActionReply when the action failed to complete. The failureReason parameter is returned with this re sponseCode, giving additional information;; REGISTERED AS {ISO10589-ISIS.proi failureResponse (8)}; failureReason PARAMETER CONTEXT ACTION-REPLY; WITH SYNTAX ISO10589-ISIS.ActionFailureReason; BEHAVIOUR failureReason-B BEHAVIOUR DEFINED AS Gives the reason why an entity failed to activate or deactivate.;; REGISTERED AS {ISO10589-ISIS.proi failureReason (9)}; constraintViolation PARAMETER CONTEXT SPECIFIC-ERROR; WITH SYNTAX ISO10589-ISIS.ConstraintViolationReason; BEHAVIOUR constraintViolation-B BEHAVIOUR DEFINED AS The specific error returned on failure of a REPLACE operation when the MO prohibits such operations under certain conditions, for example while the MO is in the disabled operational state.;; REGISTERED AS {ISO10589-ISIS.proi constraintViolation (10)}; notificationReceivingAdjacency PARAMETER CONTEXT EVENT-INFO; WITH SYNTAX ISO10589-ISIS.LocalDistinguishedName; BEHAVIOUR notificationReceivingAdjacency-B BEHAVIOUR DEFINED AS The local managed object name of the adjacency upon which the NPDU was received;; REGISTERED AS {ISO10589-ISIS.proi notificationReceivingAdjacency (11)}; notificationIDLength PARAMETER CONTEXT EVENT-INFO; WITH SYNTAX ISO10589-ISIS.IDLength; BEHAVIOUR notificationIDLength-B BEHAVIOUR DEFINED AS The IDLength specified in the ignored PDU;; REGISTERED AS {ISO10589-ISIS.proi notificationIDLength (12)}; notificationAreaAddress PARAMETER CONTEXT EVENT-INFO; WITH SYNTAX ISO10589-ISIS.AreaAddress; BEHAVIOUR notificationAreaAddress-B BEHAVIOUR DEFINED AS The Area Address which caused Maxi mumAreaAddresses to be exceeded;; REGISTERED AS {ISO10589-ISIS.proi notificationAreaAddress (13)}; notificationSourceID PARAMETER CONTEXT EVENT-INFO; WITH SYNTAX ISO10589-ISIS.SourceID; BEHAVIOUR notificationSourceID-B BEHAVIOUR DEFINED AS The source ID of the LSP;; REGISTERED AS {ISO10589-ISIS.proi notificationSourceID (14)}; notificationVirtualLinkChange PARAMETER CONTEXT EVENT-INFO; WITH SYNTAX ISO10589-ISIS.VirtualLinkChange; BEHAVIOUR notificationVirtualLinkChange-B BEHAVIOUR DEFINED AS This indicates whether the event was genrated as a result of the creation or deletion of a Virtual Link between two Level 2 Intermediate Sys tems.;; REGISTERED AS {ISO10589-ISIS.proi notificationVirtualLinkChange (15)}; notificationVirtualLinkAddress PARAMETER CONTEXT EVENT-INFO; WITH SYNTAX 
ISO10589-ISIS.NetworkEntityTitle; BEHAVIOUR notificationVirtualLinkAddress-B BEHAVIOUR DEFINED AS The Network Entity Title of the Level 2 Intermediate System at the remote end of the virtual link;; REGISTERED AS {ISO10589-ISIS.proi notificationVirtualLinkAddress (16)}; notificationNewCircuitState PARAMETER CONTEXT EVENT-INFO; WITH SYNTAX ISO10589-ISIS.NewCircuitState; BEHAVIOUR notificationNewCircuitState-B BEHAVIOUR DEFINED AS The direction of the Circuit state change specified as the resulting state. i.e. a change from On to Off is specified as Off;; REGISTERED AS {ISO10589-ISIS.proi notificationNewCircuitState (17)}; notificationNewAdjacencyState PARAMETER CONTEXT EVENT-INFO; WITH SYNTAX ISO10589-ISIS.NewAdjacencyState; BEHAVIOUR notificationNewAdjacencyState-B BEHAVIOUR DEFINED AS The direction of the Adjacency state change specified as the resulting state. i.e. a change from Up to Down is specified as Down. Any state other than Up is considered to be Down.;; REGISTERED AS {ISO10589-ISIS.proi notificationNewAdjacencyState (18)}; notificationAdjacentSystem PARAMETER CONTEXT EVENT-INFO; WITH SYNTAX ISO10589-ISIS.SystemID; BEHAVIOUR notificationAdjacentSystem-B BEHAVIOUR DEFINED AS The system ID of the adjacent system;; REGISTERED AS {ISO10589-ISIS.proi notificationAdjacentSystem (19)}; notificationReason PARAMETER CONTEXT EVENT-INFO; WITH SYNTAX ISO10589-ISIS.Reason; BEHAVIOUR notificationReason-B BEHAVIOUR DEFINED AS The associated Reason;; REGISTERED AS {ISO10589-ISIS.proi notificationReason (20)}; notificationPDUHeader PARAMETER CONTEXT EVENT-INFO; WITH SYNTAX ISO10589-ISIS.PDUHeader; BEHAVIOUR notificationPDUHeader-B BEHAVIOUR DEFINED AS The header of the PDU which caused the notification;; REGISTERED AS {ISO10589-ISIS.proi notificationPDUHeader (21)}; notificationCalledAddress PARAMETER CONTEXT EVENT-INFO; WITH SYNTAX ISO10589-ISIS.SNPAAddress; BEHAVIOUR notificationCalledAddres-B BEHAVIOUR DEFINED AS The SNPA Address which was being called when the Adjacency was taken down as a re sult of a call reject;; REGISTERED AS {ISO10589-ISIS.proi notificationCalledAddress (22)}; notificationVersion PARAMETER CONTEXT EVENT-INFO; WITH SYNTAX ISO10589-ISIS.Version; BEHAVIOUR notificationVersion-B BEHAVIOUR DEFINED AS The version number reported by the other system;; REGISTERED AS {ISO10589-ISIS.proi notificationVersion (23)}; notificationDesignatedIntermediateSystemChange PARAMETER CONTEXT EVENT-INFO; WITH SYNTAX ISO10589-ISIS.DesignatedISChange; BEHAVIOUR notificationDesignatedIntermediateSystemChange-B BEHAVIOUR DEFINED AS The direction of the change in Desig nated Intermediate System status of this system;; REGISTERED AS {ISO10589-ISIS.proi notificationDesignatedIntermediateSystemChange (24)}; notificationOverloadStateChange PARAMETER CONTEXT EVENT-INFO; WITH SYNTAX ISO10589-ISIS.OverloadStateChange; BEHAVIOUR notificationOverloadStateChange-B BEHAVIOUR DEFINED AS The direction of the change in Overload status;; REGISTERED AS {ISO10589-ISIS.proi notificationOverloadStateChange (25)}; 11.2.15 Attribute Groups counters ATTRIBUTE GROUP DESCRIPTION The group of all counters; REGISTERED AS {ISO10589-ISIS.agoi counters (1)}; 11.2.16 Behaviour Definitions resettingTimer-B BEHAVIOUR DEFINED AS This attribute specifies the interval be tween certain events in the operation of the protocol state machine. 
If the value of this attribute is changed to a new value t while the protocol state machine is in operation, the implementation shall take the necessary steps to ensure that for any time interval which was in progress when the corresponding attribute was changed, the next expiration of that interval takes place t seconds from the original start of that interval, or immediately, whichever is later. The precision with which this time shall be implemented shall be the same as that associated with the basic operation of the timer attribute; replaceOnlyWhileDisabled-B BEHAVIOUR DEFINED AS This attribute shall only permit the REPLACE operation to be performed on it while the MO is in the Disabled Operational State. An attempt to perform a REPLACE operation while the MO is in the Enabled Operational State shall fail with the generation of the constraintViolation specific error.; resourceLimiting-B BEHAVIOUR DEFINED AS This attribute places limits on some "resource". In general implementations may allocate resources up to this limit when the managed object is enabled and it may be impossible to change the allocation without first disabling and re-enabling the managed object. Therefore this International Standard only requires that it shall be possible to perform a REPLACE operation on this attribute while the MO is disabled. However some implementations may be able to change the allocation of resources without first disabling the MO. In this case it is permitted to increase the value of the attribute at any time, but it shall not be decreased below the "currently used" value of the resource. Where an attempt to perform a REPLACE operation fails either because the MO is enabled, or because an attempt has been made to decrease the value, the REPLACE operation shall fail with the generation of the constraintViolation specific error.;

11.2.17 ASN1 Modules

ISO10589-ISIS{tbd1} DEFINITIONS ::= BEGIN
-- object identifier definitions
sc6 OBJECT IDENTIFIER ::= {joint-iso-ccitt sc6(?)} -- value to be assigned by SC21 secretariat
isisoi OBJECT IDENTIFIER ::= {sc6 iSO10589(?)} -- value to be assigned by SC6 secretariat
moi OBJECT IDENTIFIER ::= {isisoi objectClass (3)}
poi OBJECT IDENTIFIER ::= {isisoi package (4)}
proi OBJECT IDENTIFIER ::= {isisoi parameter (5)}
nboi OBJECT IDENTIFIER ::= {isisoi nameBinding (6)}
aoi OBJECT IDENTIFIER ::= {isisoi attribute (7)}
agoi OBJECT IDENTIFIER ::= {isisoi attributeGroup (8)}
acoi OBJECT IDENTIFIER ::= {isisoi action (10)}
noi OBJECT IDENTIFIER ::= {isisoi notification (11)}
ActionFailureReason ::= ENUMERATED{ reason1(0), reason2(1)} -- Note: actual reasons TBS
ActionInfo ::= SET OF Parameter
ActionReply ::= SEQUENCE{ responseCode OBJECT IDENTIFIER, responseArgs SET OF Parameter OPTIONAL}
AddressPrefix ::= OCTETSTRING(SIZE(0..20))
AdjacencyState ::= ENUMERATED{ initializing(0), up(1), failed(2)} -- was 4 in N5821, is it required at all?
AreaAddress ::= OCTETSTRING(SIZE(1..20))
AreaAddresses ::= SET OF AreaAddress
Boolean ::= BOOLEAN
CircuitID ::= OCTETSTRING(SIZE(1..10))
CompleteSNPInterval ::= INTEGER(1..600)
ConstraintViolationReason ::= OBJECT IDENTIFIER
DRISISHelloTimer ::= INTEGER(1..65535)
DatabaseState ::= ENUMERATED{ off(0), on(1), waiting(2)}
DesignatedISChange ::= ENUMERATED{ resigned(0), elected(1)}
DefaultESHelloTimer ::= INTEGER(1..65535)
EndSystemIDs ::= SET OF SystemID
GraphicString ::= GRAPHICSTRING
HelloTimer ::= INTEGER(1..65535)
HoldingTimer ::= INTEGER(1..65535)
HopMetric ::= INTEGER(0..63)
ISISHelloTimer ::= INTEGER(1..65535)
IDLength ::= INTEGER(0..9)
IdleTimer ::= INTEGER(1..65535)
InitialMinimumTimer ::= INTEGER(1..65535)
IntermediateSystemPriority ::= INTEGER(1..127)
ISType ::= ENUMERATED{ level1IS(1), level2IS(2)}
LANAddress ::= OCTETSTRING(SIZE(6))
AdjacencyUsageType ::= ENUMERATED{ undefined(0), level1(1), level2(2), level1and2(3)}
LocalDistinguishedName ::= CMIP-1.ObjectInstance -- A suitable free-standing definition is required
LSPID ::= OCTETSTRING(SIZE(2..11))
MappingType ::= ENUMERATED{ manual(0), x121(1)}
MaximumBuffers ::= INTEGER(1..65535)
MaximumCallAttempts ::= INTEGER(1..65535)
MaximumLSPGenerationInterval ::= INTEGER(1..65535)
MaximumPathSplits ::= INTEGER(1..32)
MaximumSVCAdjacencies ::= INTEGER(1..65535)
MaximumVirtualAdjacencies ::= INTEGER(0..32)
MetricIncrement ::= INTEGER(0..63)
MetricType ::= ENUMERATED{ internal(0), external(1)}
MinimumBroadcastLSPTransmissionInterval ::= INTEGER(1..65535)
MinimumLSPGenerationInterval ::= INTEGER(1..65535)
MinimumLSPTransmissionInterval ::= INTEGER(1..65535)
NeighbourSystemType ::= ENUMERATED{ unknown(0), endSystem(1), intermediateSystem(2), l1IntermediateSystem(3), l2IntermediateSystem(4)}
NetworkEntityTitle ::= OCTETSTRING(SIZE(1..19))
NewAdjacencyState ::= ENUMERATED{ down(0), up(1)}
NewCircuitState ::= ENUMERATED{ off(0), on(1)}
NonWrappingCounter ::= INTEGER(0..2^64-1)
NotificationInfo ::= SET OF Parameter
NSAPAddress ::= OCTETSTRING(SIZE(1..20))
OctetString ::= OCTETSTRING
OriginatingLSPBufferSize ::= INTEGER(512..1492)
OutputAdjacencies ::= SET OF LocalDistinguishedName
OverloadStateChange ::= ENUMERATED{ on(0), waiting(1)}
Parameter ::= SEQUENCE{ paramId OBJECT IDENTIFIER, paramInfo ANY DEFINED BY paramId}
PartialSNPInterval ::= INTEGER(1..65535)
Password ::= OCTETSTRING(SIZE(0..254))
Passwords ::= SET OF Password
PathMetric ::= INTEGER(0..1023)
PDUHeader ::= OCTETSTRING(SIZE(0..255))
PollESHelloRate ::= INTEGER(1..65535)
Reason ::= ENUMERATED{ holdingTimerExpired(0), checksumError(1), oneWayConnectivity(2), callRejected(3), reserveTimerExpired(4), circuitDisabled(5), versionSkew(6), areaMismatch(7), maximumBroadcastIntermediateSystemsExceeded(8), maximumBroadcastEndSystemsExceeded(9), wrongSystemType(10)}
ResponseCode ::= OBJECT IDENTIFIER
RecallTimer ::= INTEGER(1..65535)
ReserveTimer ::= INTEGER(1..65535)
SNPAAddress ::= NUMERICSTRING(FROM("0"|"1"|"2"|"3"|"4"|"5"|"6"|"7"|"8"|"9"))(SIZE(0..15)) -- Up to 15 Digits 0..9
SNPAAddresses ::= SET OF SNPAAddress
CircuitType ::= ENUMERATED{ broadcast(0), ptToPt(1), staticIN(2), staticOut(3), dA(4)}
SourceID ::= OCTETSTRING(SIZE(1..10))
SystemID ::= OCTETSTRING(SIZE(0..9))
VirtualLinkChange ::= ENUMERATED{ deleted(0), created(1)}
Version ::= GRAPHICSTRING
WaitingTime ::= INTEGER(1..65535)
maximumPathSplits-Default INTEGER ::= 2
MaximumPathSplits-Permitted ::= INTEGER(1..32)
maximumBuffers-Default INTEGER ::= ImpSpecific
MaximumBuffers-Permitted ::= INTEGER(1..ImpSpecific)
minimumLSPTransmissionInterval-Default INTEGER ::= 5
MinimumLSPTransmissionInterval-Permitted ::= INTEGER(5..30)
maximumLSPGenerationInterval-Default INTEGER ::= 900
MaximumLSPGenerationInterval-Permitted ::= INTEGER(60..900)
minimumBroadcastLSPTransmissionInterval-Default INTEGER ::= 33
MinimumBroadcastLSPTransmissionInterval-Permitted ::= INTEGER(1..65535)
completeSNPInterval-Default INTEGER ::= 10
CompleteSNPInterval-Permitted ::= INTEGER(1..600)
originatingL1LSPBufferSize-Default INTEGER ::= receiveLSPBufferSize
OriginatingL1LSPBufferSize-Permitted ::= INTEGER(512..receiveLSPBufferSize)
manualAreaAddresses-Default AreaAddresses ::= {}
ManualAreaAddresses-Permitted ::= AreaAddresses (SIZE(0..MaximumAreaAddresses))
minimumLSPGenerationInterval-Default INTEGER ::= 30
MinimumLSPGenerationInterval-Permitted ::= INTEGER(5..300)
defaultESHelloTime-Default INTEGER ::= 600
DefaultESHelloTime-Permitted ::= INTEGER(1..65535)
pollESHelloRate-Default INTEGER ::= 50
PollESHelloRate-Permitted ::= INTEGER(1..65535)
partialSNPInterval-Default INTEGER ::= 2
PartialSNPInterval-Permitted ::= INTEGER(1..65535)
waitingTime-Default INTEGER ::= 60
WaitingTime-Permitted ::= INTEGER(1..65535)
dRISISHelloTimer-Default INTEGER ::= 1
DRISISHelloTimer-Permitted ::= INTEGER(1..65535)
originatingL2LSPBufferSize-Default INTEGER ::= receiveLSPBufferSize
OriginatingL2LSPBufferSize-Permitted ::= INTEGER(512..receiveLSPBufferSize)
maximumVirtualAdjacencies-Default INTEGER ::= 2
MaximumVirtualAdjacencies-Permitted ::= INTEGER(0..32)
helloTimer-Default INTEGER ::= 10
HelloTimer-Permitted ::= INTEGER(1..21845)
defaultMetric-Default INTEGER ::= 20
DefaultMetric-Permitted ::= INTEGER(1..MaxLinkMetric)
optionalMetric-Default INTEGER ::= 0
OptionalMetric-Permitted ::= INTEGER(0..MaxLinkMetric)
metricType-Default MetricType ::= Internal
iSISHelloTimer-Default INTEGER ::= 3
ISISHelloTimer-Permitted ::= INTEGER(1..21845)
externalDomain-Default BOOLEAN ::= TRUE
l1IntermediateSystemPriority-Default INTEGER ::= 64
L1IntermediateSystemPriority-Permitted ::= INTEGER(1..127)
callEstablishmentMetricIncrement-Default INTEGER ::= 0
CallEstablishmentMetricIncrement-Permitted ::= INTEGER(0..MaxLinkMetric)
idleTimer-Default INTEGER ::= 30
IdleTimer-Permitted ::= INTEGER(0..65535)
initialMinimumTimer-Default INTEGER ::= 55
InitialMinimumTimer-Permitted ::= INTEGER(1..65535)
reserveTimer-Default INTEGER ::= 600
ReserveTimer-Permitted ::= INTEGER(1..65535)
maximumSVCAdjacencies-Default INTEGER ::= 1
MaximumSVCAdjacencies-Permitted ::= INTEGER(1..65535)
reservedAdjacency-Default BOOLEAN ::= FALSE
neighbourSNPAAddress-Default INTEGER ::= 0
recallTimer-Default INTEGER ::= 60
RecallTimer-Permitted ::= INTEGER(0..65535)
maximumCallAttempts-Default INTEGER ::= 10
MaximumCallAttempts-Permitted ::= INTEGER(0..255)
manualL2OnlyMode-Default BOOLEAN ::= FALSE
l2IntermediateSystemPriority-Default INTEGER ::= 64
L2IntermediateSystemPriority-Permitted ::= INTEGER(1..127)
lANAddress-Default LANAddress ::= 000000000000
sNPAAddresses-Default SNPAAddresses ::= {}
password-Default Password ::= {}
passwords-Default Passwords ::= {} -- The empty set
END
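As a non-normative illustration of the resettingTimer-B behaviour defined in 11.2.16 above, the following sketch (Python; the function name is not part of this International Standard) computes the next expiration after a timer attribute is replaced while an interval is in progress.

   # Non-normative sketch of resettingTimer-B: when a timer attribute is
   # replaced by a new value t while an interval is running, the next
   # expiration occurs t seconds from the original start of that interval,
   # or immediately, whichever is later.
   def next_expiration_after_replace(interval_start, new_value_t, now):
       """All arguments are seconds on the same clock as the timer itself."""
       return max(interval_start + new_value_t, now)

For example, if helloTimer (default 10) is replaced by 3 when 7 seconds of the current interval have already elapsed, the interval expires immediately rather than 3 seconds after its original start.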
12 Conformance

12.1 Static Conformance Requirements

12.1.1 Protocol Implementation Conformance Statement

A Protocol Implementation Conformance Statement (PICS) shall be completed in respect of any claim for conformance of an implementation to this International Standard: the PICS shall be produced in accordance with the relevant PICS proforma in Annex A.

12.1.2 Static Conformance for all ISs

A system claiming conformance to this International Standard shall be capable of:

a) calculating a single minimum cost route to each destination according to 7.2.6 for the default metric specified in 7.2.2;

b) utilising Link State information from a system only when an LSP with LSP number 0 and remaining lifetime > 0 is present according to 7.2.5;

c) removing excess paths according to 7.2.7;

d) performing the robustness checks according to 7.2.8;

e) constructing a forwarding database according to 7.2.9;

f) if (and only if) Area Partition Repair is supported,
1) performing the operations according to 7.2.10;
2) performing the encapsulation operations in the forwarding process according to 7.4.3.2; and
3) performing the decapsulation operations in the receive process according to 7.4.4;
TEMPORARY NOTE - may need to reorganise clause 7.4.4 in order to make it crystal clear what is required in the receive process in the presence/absence of partition repair.

g) computing area addresses according to 7.2.11;

h) generating local Link State information as required by 7.3.2;

i) including information from Manual Adjacencies according to 7.3.3.1;

j) if (and only if) Reachable Addresses are supported, including information from Reachable Addresses according to 7.3.3.2;

k) generating multiple LSPs according to 7.3.4;

l) generating LSPs periodically according to 7.3.5;

m) generating LSPs on the occurrence of events according to 7.3.6;

n) generating an LSP checksum according to 7.3.11;

o) operating the Update Process according to 7.3.12 - 7.3.17, including controlling the rate of LSP transmission only for each broadcast circuit (if any) according to 7.3.15.6;

p) operating the LSP database overload procedures according to 7.3.19.1;

q) selecting the appropriate forwarding database according to 7.4.2;

r) forwarding ISO 8473 PDUs according to 7.4.3.1 and 7.4.3.3;

s) operating the receive process according to 7.4.4;
TEMPORARY NOTE - item 1 of the second bulleted list is only required if you implement partition repair. We need to reorganise the structure so we can pull this out.

t) performing on each supported Point-to-Point circuit (if any):
1) forming and maintaining adjacencies according to 8.2;

u) performing on each supported ISO 8208 circuit (if any):
1) SVC establishment according to 8.3.2.1 using the network layer protocols according to 8.3.1;
2) if Reachable Addresses are supported, the operations specified in 8.3.2.2 - 8.3.5.6;
3) if callEstablishmentMetricIncrement values greater than zero are supported, the operations specified in 8.3.5.3;
4) if the Reverse Path Cache is supported, the operations specified in 8.3.3;

v) performing on each supported broadcast circuit (if any):
1) the pseudonode operations according to 7.2.3;
2) controlling the rate of LSP transmission according to 7.3.15.6;
3) the operations specified in 8.4.1 - 8.4.4 and 8.4.6;
4) the operations specified in 8.4.5;

w) constructing and correctly parsing all PDUs according to clause 9;

x) providing a system environment in accordance with clause 10;

y) being managed via the system management attributes defined in clause 11. For all attributes referenced in the normative text, the default value (if any) shall be supported. Other values shall be supported if referenced in a REQUIRED VALUES clause of the GDMO definition;

z) if authentication procedures are implemented:
1) the authentication field processing functions of clauses 7.3.7 - 7.3.10, 7.3.15.1 - 7.3.15.4, 8.2.3 - 8.2.4, and 8.4.1.1;
2) the Authentication Information field of the PDU in clauses 9.5 - 9.13.

12.1.3 Static Conformance Requirements for level 1 ISs

A system claiming conformance to this International Standard as a level 1 IS shall conform to the requirements of 12.1.2 and in addition shall be capable of:

a) identifying the nearest Level 2 IS according to 7.2.9.1;
b) generating Level 1 LSPs according to 7.3.7;
c) generating Level 1 pseudonode LSPs for each supported broadcast circuit (if any) according to 7.3.8;
d) performing the actions in Level 1 Waiting State according to 7.3.19.2.

12.1.4 Static Conformance Requirements for level 2 ISs

A system claiming conformance to this International Standard as a level 2 IS shall conform to the requirements of 12.1.2 and in addition shall be capable of:

a) setting the attached flag according to 7.2.9.2;
b) generating Level 2 LSPs according to 7.3.9;
c) generating Level 2 pseudonode LSPs for each supported broadcast circuit (if any) according to 7.3.10;
d) performing the actions in Level 2 Waiting State according to 7.3.19.3.

12.2 Dynamic Conformance

12.2.1 Receive Process Conformance Requirements

Any protocol function supported shall be implemented in accordance with 7.4.4.

12.2.2 Update Process Conformance Requirements

Any protocol function supported shall be implemented in accordance with 7.3 and its subclauses. Any PDU transmitted shall be constructed in accordance with the appropriate subclauses of 9.

12.2.3 Decision Process Conformance Requirements

Any protocol function supported shall be implemented in accordance with 7.2 and its subclauses.

12.2.4 Forwarding Process Conformance Requirements

Any protocol function supported shall be implemented in accordance with 7.4 and its subclauses.

12.2.5 Performance Requirements

This International Standard requires that the following performance criteria be met. These requirements apply regardless of other demands on the system; if an Intermediate system has other tasks as well, those will only get resources not required to meet these criteria. Each Intermediate system implementation shall specify (in its PICS):

a) the maximum number of other Intermediate systems it can handle. (For L1 Intermediate systems that means Intermediate systems in the area; for L2 Intermediate systems that is the sum of Intermediate systems in the area and Intermediate systems in the L2 subdomain.) Call this limit N.

b) the maximum supported forwarding rate in ISO 8473 PDUs per second.
12.2.5.1 Performance requirements on the Update process

The implementation shall guarantee the update process enough resources to process N LSPs per 30 seconds. (Resources = CPU, memory, buffers, etc.) In a stable topology the arrival of a single new LSP on a circuit shall result in the propagation of that new LSP over the other circuits of the IS within one second, irrespective of the forwarding load for ISO 8473 data PDUs.

12.2.5.2 Performance requirement on the Decision process

The implementation shall guarantee the decision process enough resources to complete (i.e. start to finish) within 5 seconds, in a stable topology while forwarding at the maximum rate. (For L2 Intermediate Systems, this applies to the two levels together, not each level separately.)

12.2.5.3 Reception and Processing of PDUs

An ideal Intermediate system would be able to correctly process all PDUs, both control and data, with which it was presented, while simultaneously running the decision process and responding to management requests. However, in the implementations of real Intermediate systems some compromises must be made. The way in which these compromises are made can dramatically affect the correctness of operation of the Intermediate system. The following general principles apply.

a) A stable topology should result in stable routes when forwarding at the maximum rated forwarding rate.

b) Some forwarding progress should always be made (albeit over incorrect routes) even in the presence of a maximally unstable topology.

In order to further characterise the required behaviour, it is necessary to identify the following types of traffic.

a) IIH traffic. This traffic is important for maintaining Intermediate system adjacencies and hence the Intermediate system topology. In order to prevent gratuitous topology changes it is essential that Intermediate system adjacencies are not caused to go down erroneously. In order to achieve this no more than ISISHoldingMultiplier - 1 IIH PDUs may be dropped between any pair of Intermediate systems. A safer requirement is that no IIH PDUs are dropped. The rate of arrival of IIH PDUs is approximately constant and is limited on Point-to-Point links to 1/iSISHelloTimer and on LANs to a value of approximately 2(n/iSISHelloTimer) + 2, where n is the number of Intermediate systems on the LAN (assuming the worst case that they are all Level 2 Intermediate systems).

b) ESH PDU traffic. This traffic is important for maintaining End system adjacencies, and has relatively low processing latency. As with IIH PDUs, loss of End system adjacencies will cause gratuitous topology changes which will result in extra control traffic. The rate of arrival of ESH PDUs on Point-to-Point links is limited to approximately 1/defaultESHelloTimer under all conditions. On LANs the background rate is approximately n/defaultESHelloTimer where n is the number of End systems on the LAN. The maximum rate during polling is limited to approximately n/pollESHelloRate averaged over a period of about 2 minutes. (Note that the actual peak arrival rate over a small interval may be much higher than this.)

c) LSP (and SNP) traffic. This traffic will be retransmitted indefinitely by the update process if it is dropped, so there is no requirement to be able to process every received PDU. However, if a substantial proportion are lost, the rate of convergence to correct routes will be affected, and bandwidth and processing power will be wasted. On Point-to-Point links the peak rate of arrival is limited only by the speed of the data link and the other traffic flowing on that link. The maximum average rate is determined by the topology. On LANs the rate is limited at a first approximation to a maximum rate of 1000/minimumBroadcastLSPTransmissionInterval, however it is possible that this may be multiplied by a factor of up to n, where n is the number of Intermediate systems on the LAN, for short periods. An Intermediate system shall be able to receive and process at least the former rate without loss, even if presented with LSPs at the higher rate. (i.e. it is permitted to drop LSPs, but must process at least 1000/minimumBroadcastLSPTransmissionInterval per second of those presented.) The maximum background rate of LSP traffic (for a stable topology) is dependent on the maximum supported configuration size and the settings of maximumLSPGenerationInterval. For these purposes the default value of 900 seconds can be assumed. The number of LSPs per second is then very approximately (n1 + n2 + ne/x)/900 where n1 is the number of level 1 Intermediate systems, n2 the number of level 2 Intermediate systems, ne the number of End system IDs and x the number of IDs which can be fitted into a single LSP.

NOTE - This gives a value around 1 per second for typical maximum configurations of: 4000 IDs, 100 L1 Intermediate systems per area, 400 L2 Intermediate systems.

d) Data Traffic. This is theoretically unlimited and can arrive at the maximum data rate of the Point-to-Point link or LAN (for ISO 8802.3 this is 14,000 PDUs per second). In practice it will be limited by the operation of the congestion avoidance and control algorithms, but owing to the relatively slow response time of these algorithms, substantial peaks are likely to occur. An Intermediate system shall state in its PICS its maximum forwarding rate. This shall be quoted under at least the following conditions.
1) A stable topology of maximum size.
2) A maximally unstable topology. This figure shall be non-zero, but may reasonably be as low as 1 PDU per second.

The following constraints must be met.

a) The implementation shall be capable of receiving the maximum rate of ISH PDUs without loss whenever the following conditions hold:
1) the data forwarding traffic rate averaged over any period of one second does not exceed the rate which the implementation claims to support;
2) the ESH and LSP rates do not exceed the background (stable topology) rate.

b) If it is unavoidable that PDUs are dropped, it is a goal that the order of retaining PDUs shall be as follows (i.e. it is least desirable for IIH PDUs to be dropped):
1) IIH PDUs
2) ESH PDUs
3) LSPs and SNPs
4) data PDUs.
However, no class of traffic shall be completely starved. One way to achieve this is to allocate a queue of suitable length to each class of traffic and place the PDUs onto the appropriate queue as they arrive. If the queue is full the PDUs are discarded. Processor resources shall be allocated to the queues to ensure that they all make progress with the same priorities as above. This model assumes that an implementation is capable of receiving PDUs and selecting their correct queue at the maximum possible data rate (14,000 PDUs per second for a LAN). If this is not the case, reception of data traffic at a rate greater than some limit (which must be greater than the maximum rated limit) will cause loss of some IIH PDUs even in a stable topology. This limit shall be quoted in the PICS if it exists.

NOTE - Starting from the stable topology condition at maximum data forwarding rate, an increase in the arrival rate of data PDUs will initially only cause some data NPDUs to be lost. As the rate of arrival of data NPDUs is further increased a point may be reached at which random PDUs are dropped. This is the rate which must be quoted in the PICS.

12.2.5.4 Transmission

Sufficient processor resources shall be allocated to the transmission process to enable it to keep pace with reception for each PDU type. Where prioritisation is required, the same order as for reception of PDU types applies.
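The arrival-rate figures quoted in 12.2.5.3 can be combined into a simple, non-normative budget calculation. The sketch below (Python; the function and parameter names are illustrative, not part of this International Standard) uses the formulas from that clause with the attribute defaults from 11.2.17; minimumBroadcastLSPTransmissionInterval is assumed here to be expressed in milliseconds, which is why the peak LSP rate is 1000 divided by it.

   # Non-normative sketch: approximate control-PDU arrival rates on a LAN,
   # using the formulas quoted in 12.2.5.3. Default values follow 11.2.17.
   def control_traffic_budget(n_is, n_es, n1, n2, ne, ids_per_lsp,
                              isis_hello_timer=3,            # iSISHelloTimer-Default
                              default_es_hello_timer=600,    # defaultESHelloTime-Default
                              min_bcast_lsp_tx_interval=33,  # assumed milliseconds
                              max_lsp_generation_interval=900):
       """Return approximate PDUs per second for each class of traffic."""
       iih = 2 * (n_is / isis_hello_timer) + 2          # worst case: all L2 ISs on the LAN
       esh_background = n_es / default_es_hello_timer   # background ESH rate
       lsp_peak = 1000 / min_bcast_lsp_tx_interval      # rate that must be processed without loss
       lsp_background = (n1 + n2 + ne / ids_per_lsp) / max_lsp_generation_interval
       return {"IIH": iih, "ESH": esh_background,
               "LSP/SNP peak": lsp_peak, "LSP/SNP background": lsp_background}

For the typical maximum configuration quoted in the NOTE of 12.2.5.3 c) (4000 End system IDs, 100 L1 and 400 L2 Intermediate systems), lsp_background evaluates to roughly 1 LSP per second for plausible values of ids_per_lsp, in line with the figure given there.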
Annex A PICS Proforma (This annex is normative) A.1 Introduction The supplier of a protocol implementation which is claimed to conform to International Standard ISO 10589, whether as a level 1 or level 2 Intermediate system implementation, shall complete the applicable Protocol Implementation Conformance Statement (PICS) proforma. A completed PICS proforma is the PICS for the implemen tation in question. The PICS is a statement of which capa bilities and options of the protocol have been implemented. The PICS can have a number of uses, including use: -by the protocol implementor, as a check-list to reduce the risk of failure to conform to the standard through oversight; -by the supplier and acquirer or potential acquirer of the implementation, as a detailed indication of the capabilities of the implementation, stated relative to the common basis for understanding provided by the standard PICS proforma; -by the user or potential user of the implementa tion, as a basis for initially checking the possibility of interworking with another implementation (note that, while interworking can never be guaranteed, failure to interwork can often be predicted from incompatible PICS's); -by a protocol tester, as the basis for selecting appropri ate tests against which to assess the claim for conformance of the implementation. A.2 Abbreviations and Special Symbols A.2.1 Status-related symbols M mandatory O optional O.<n> optional, but support of at least one of the group of options labelled by the same numeral <n> is required. X prohibited not applicable c.<p> conditional requirement, according to condi tion <p> A.3 Instructions for Completing the PICS Proformas A.3.1 General structure of the PICS proforma The first part of the PICS proforma Implementation Identification and Protocol Summary is to be completed as indicated with the information necessary to identify fully both the supplier and the implementation. The main part of the PICS proforma is a fixed-format ques tionnaire divided into subclauses each containing a group of individual items. Answers to the questionnaire items are to be provided in the rightmost column, either by simply marking an answer to indicate a restricted choice (usually Yes or No), or by entering a value or a set or range of val ues. (Note that there are some items where two or more choices from a set of possible answers can apply: all rele vant choices are to be marked.) Each item is identified by an item reference in the first col umn; the second column contains the question to be an swered; the third column contains the reference or refer ences to the material that specifies the item in the main body of the standard. the remaining columns record the status of the item whether support is mandatory, optional or conditional and provide the space for the answers: see A.3.4 below. A supplier may also provide or be required to provide further information, categorised as either Additional Infor mation or Exception Information. When present, each kind of further information is to be provided in a further sub clause of items labelled A<i> or X<i> respectively for cross-referencing purposes, where <i> is any unambiguous identification for the item (e.g. simply a number): there are no other restrictions on its format and presentation. A completed PICS proforma, including any Additional In formation and Exception Information, is the Protocol Im plementation Conformance Statement for the implementa tion in question. 
NOTE - Where an implementation is capable of being con figured in more than one way, a single PICS may be able to describe all such configurations. However, the supplier has the choice of providing more than one PICS, each covering some subset of the implementation's configuration capabili ties, in case this makes for easier and clearer presentation of the information. A.3.2 Additional Information Items of Additional Information allow a supplier to provide further information intended to assist the interpretation of the PICS. It is not intended or expected that a large quantity will be supplied, and a PICS can be considered complete without any such information. Examples might be an out line of the ways in which a (single) implementation can be set up to operate in a variety of environments and configu rations. References to items of Additional information may be en tered next to any answer in the questionnaire, and may be included in items of Exception Information. A.3.3 Exception Information It may occasionally happen that a supplier will wish to an swer an item with mandatory or prohibited status (after any conditions have been applied) in a way that conflicts with the indicated requirement. No pre-printed answer will be found in the Support column for this, but the Supplier may write the desired answer into the Support column. If this is done, the supplier is required to provide an item of Excep tion Information containing the appropriate rationale, and a cross-reference from the inserted answer to the Exception item. An implementation for which an Exception item is required in this way does not conform to ISO 10589. NOTE - A possible reason for the situation described above is that a defect report is being progressed, which is expected to change the requirement that is not met by the implemen tation. A.3.4 Conditional Status A.3.4.1 Conditional items The PICS proforma contains a number of conditional items. These are items for which the status mandatory, optional or prohibited that applies is dependent upon whether or not certain other items are supported, or upon the values supported for other items. In many cases, whether or not the item applies at all is conditional in this way, as well as the status when the item does apply. Individual conditional items are indicated by a conditional symbol in the Status column as described in A.3.4.2 below. Where a group of items are subject to the same condition for applicability, a separate preliminary question about the condition appears at the head of the group, with an instruc tion to skip to a later point in the questionnaire if the Not Applicable answer is selected. A.3.4.2 Conditional symbols and conditions A conditional symbol is of the form c.<n> or c.G<n> where <n> is a numeral. For the first form, the numeral identifies a condition appearing in a list at the end of the subclause containing the item. For the second form, c.G<n>, the nu meral identifies a condition appearing in the list of global conditions at the end of the PICS. A simple condition is of the form:if <p> then <s1> else <s2> where <p> is a predicate (see A.3.4.3 below), and <s1> and <s2> are either basic status symbols (M,O,O.<n>, or X) or the symbol . An extended condition is of the formif <p1> then <s1> else <s2> else if <p2> then <s2> [else if <p3> ...] else <sn> where <p1> etc. are predicates and <s1> etc. are basic status symbols or . 
The status symbol applicable to an item governed by a sim ple condition is <s1> if the predicate of the condition is true, and <s2> otherwise; the status symbol applicable to an item governed by an extended condition is <si> where <pi> is the first true predicate, if any, in the sequence <p1>, <p2>..., and <sn> if no predicate is true. A.3.4.3 Predicates A simple predicate in a condition is either a)a single item reference; or b)a relation containing a comparison operator (=, <, etc.) with one (or both) of its operands being an item refer ence for an item taking numerical values as its answer. In case (a) the predicate is true if the item referred to is marked as supported, and false otherwise. In case (b), the predicate is true if the relation holds when each item refer ence is replaced by the value entered in the Support column as answer to the item referred to. Compound predicates are boolean expressions constructed by combining simple predicates using the boolean operators AND, OR and NOT, and parentheses, in the usual way. A compound predicate is true if and only if the boolean ex pression evaluates to true when the simple predicates are in terpreted as described above. Items whose references are used in predicates are indicated by an asterisk in the Item column. A.3.4.4 Answering conditional items To answer a conditional item, the predicate(s) of the condi tion is (are) evaluated as described in A.3.4.3 above, and the applicable status symbol is determined as described in A.3.4.2. If the status symbol is this indicates that the item is to be marked in this case; otherwise, the Support column is to be completed in the usual way. When two or more basic status symbols appear in a condi tion for an item, the Support column for the item contains one line for each such symbol, labelled by the relevant sym bol. the answer for the item is to be marked in the line la belled by the symbol selected according to the value of the condition (unselected lines may be crossed out for added clarity). For example, in the item illustrated below, the N/A column would be marked if neither predicate were true; the answer line labelled M: would be marked if item A4 was marked as supported, and the answer line labelled O: would be marked if the condition including items D1 and B52 applied.Item References Status N/A Support H3 Is ... supported? 42.3(d) C.1 M: Yes O: Yes No C.1if A4 then M else if D1 AND (B52 < 3) then O else A.4 Identification A.4.1 Implementation IdentificationSupplierContact point for queriesabout this PICSImplementation Name(s)and Version(s)Operating systemName(s and Version(s)Other Hardware and Operating SystemsClaimedSystem Name(s)(if different)Notes: a)Only the first three items are required for all implementations; others may be completed as appropriate in meeting the requirements for full identification. b)The terms Name and Version should be interpreted appropriately to correspond with a supplier's terminology (using, e.g., Type, Series, Model) A.4.2 Protocol Summary: ISO 10589:19xxProtocol VersionAddenda Implemented(if applicable)AmmendmentsImplementedDate of StatementHave any Exception items been required (see A.3.3)? No Yes (The answer Yes means that the implementation does not conform to ISO 10589) PICS Proforma: Item References Status N/A Support AllIS Are all basic ISIS routeing functions implemented? 12.1.2 M M: Yes C.1if L2IS then O else C.2if 8208 then O else PartitionRe pair Is Level 1 Partition Repair imple mented? 
12.1.2.f C.1 O: Yes No
L1IS Are Level 1 ISIS routeing functions implemented? 12.1.3 M M: Yes
L2IS Are Level 2 ISIS routeing functions implemented? 12.1.4 O O: Yes No
PtPt Are point-to-point circuits implemented? 12.1.2.t O.1 O: Yes No
8208 Are ISO 8208 circuits implemented? 12.1.2.u O.1 O: Yes No
LAN Are broadcast circuits implemented? 12.1.2.v O.1 O: Yes No
EqualCostPaths Is computation of equal minimum cost paths implemented? 7.2.6 O O: Yes No
Downstream Is computation of downstream routes implemented? 7.2.6 O O: Yes No
DelayMetric Is path computation based on the Delay metric implemented? 7.2.2 O O: Yes No
ExpenseMetric Is path computation based on the Expense metric implemented? 7.2.2 O O: Yes No
Prefixes Are Reachable Address Prefixes implemented? 12.1.2.j C.1 O: Yes No
ForwardingRate How many ISO 8473 PDUs can the implementation forward per second? 12.2.5.1.b M PDUs/sec
L2 ISCount How many Level 2 ISs does the implementation support? 12.2.5.1 C.1 N =
callEstablishmentMetricIncrement Are non-zero values of the callEstablishmentMetricIncrement supported? 12.1.2.u.3 C.2 O: Yes No
L1 ISCount How many Level 1 ISs does the implementation support? 12.2.5.1 M N =
ReversePathCache Is the 8208 Reverse Path Cache supported? 12.1.2.u.4 C.2 O: Yes No
ErrorMetric Is path computation based on the Error metric implemented? 7.2.2 O O: Yes No
ID field Length What values of the routeingDomainIDLength are supported by this implementation? 7.1.1 M Values = ; Is the value settable by System Management? Yes No
PDUAuthentication Is PDU Authentication based on Passwords implemented? 12.1.2.z O O: Yes No
Conditions: C.1: if L2IS then O else - ; C.2: if 8208 then O else -

Annex B Supporting Technical Material (This annex is informative)

B.1 Matching of Address Prefixes

The following example shows how address prefixes may be matched according to the rules defined in 7.1.4. The prefix 37-123 matches both the full NSAP addresses 37-1234::AF< and 37-123::AF<, which are encoded as 3700000000001234AF< and 3700000000000123AF< respectively. This can be achieved by first converting the address to be compared to an internal decoded form (i.e. any padding, as indicated by the particular AFI, is removed), which corresponds to the external representation of the address. The position of the end of the IDP must be marked, since it can no longer be deduced. This is done by inserting the semi-octet F after the last semi-octet of the IDP. (There can be no confusion, since the abstract syntax of the IDP is decimal digits.) Thus the examples above become in decoded form 371234FAF< and 37123FAF<, and the prefix 37-123 matches as a leading sub-string of both of them. For comparison purposes the prefix is converted to the internal decoded form as above.

B.2 Addressing and Routeing

In order to ensure the unambiguous identification of Network and Transport entities across the entire OSIE, some form of address administration is mandatory. ISO 8348/Add.2 specifies a hierarchical structure for network addresses, with a number of top-level domains responsible for administering addresses on a world-wide basis. These address registration authorities in turn delegate to sub-authorities the task of administering portions of the address space. There is a natural tendency to repeat this sub-division to a relatively fine level of granularity in order to ease the task of each sub-authority, and to assign responsibility for addresses to the most localised administrative body feasible. This results in (at least in theory) reduced costs of address administration and reduced danger of massive address duplication through administrative error. Furthermore, political factors come into play which require the creation of sub-authorities in order to give competing interests the impression of hierarchical parity. For example, at the top level of the ISO geographic address space, every country is assigned an equally-sized portion of the address space even though some countries are small and might in practice never want to undertake administration of their own addresses. Other examples abound at lower levels of the hierarchy, where divisions of a corporation each wish to operate as an independent address assignment authority even though this is inefficient operationally and may waste monumental amounts of potential address space. If network topologies and traffic matrices aligned naturally with the hierarchical organisation of address administration authorities, this profligate use of hierarchy would pose little problem, given the large size (20 octets) of the N-address space. Unfortunately, this is not usually the case, especially at higher levels of the hierarchy.
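As an informal illustration of the prefix-matching rules shown in B.1 above, the following sketch (Python; not part of this International Standard) converts an encoded address to the decoded form by stripping the AFI-dependent padding, marks the end of the IDP with the semi-octet F, and then matches the prefix as a leading sub-string. The padding-stripping step is simplified and assumes the AFI 37 example above (a decimal IDI padded with leading zeros); other AFIs would strip padding differently.

    def decoded_form(afi, idi, dsp):
        # Decoded form used for prefix matching in B.1: AFI, the IDI with its
        # padding removed, the marker semi-octet F, then the DSP.
        return afi + idi.lstrip("0") + "F" + dsp

    def prefix_matches(prefix, decoded_address):
        # A prefix matches if it is a leading sub-string of the decoded form.
        return decoded_address.startswith(prefix)

    # The two example addresses of B.1 both match the prefix 37-123,
    # written 37123 in decoded form.
    addr1 = decoded_form("37", "00000000001234", "AF")   # -> 371234FAF
    addr2 = decoded_form("37", "00000000000123", "AF")   # -> 37123FAF
    print(prefix_matches("37123", addr1), prefix_matches("37123", addr2))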
Network topologies may cross address administration boundaries in many cases, for example: -Multi-national Corporations with a backbone network that spans several countries -Community-of-interest networks, such as academic or research networks, which span organisations and ge ographies -Military networks, which follow treaty alignments rather than geographic or national administrations -Corporate networks where divisions at times operate as part of a contractor's network, such as with trade consortia or government procurements. These kinds of networks also exhibit rich internal topolo gies and large scale (105 systems), which require sophisti cated routeing technology such as that provided by this In ternational Standard. In order to deploy such networks ef fectively, a considerable amount of address space must be left over for assignment in a way which produces efficient routes without undue consumption of memory and bandwidth for routeing overhead11This is just a fancy way of saying that hierarchical routing, with its natural effect on address assignment, is a mandatory requirement for such net works. . Similarly important is the inter-connection of these net works via Inter-domain routeing technology. If all of the as signment flexibility of the addressing scheme is exhausted in purely administrative hierarchy (at the high-order end of the address) and in Intra-Domain routeing assignment (at the low end of the address) there may be little or no address space left to customise to the needs of inter-domain routing. The considerations for how addresses may be structured for the Intra- and Inter-domain cases are discussed in more de tail in the following two clauses. B.2.1 Address Structure for Intra-domain Routeing The IS-IS Intra-domain routeing protocol uses a preferred addressing scheme. There are a number of reasons the de signers of this protocol chose to specify a single address structure, rather than leaving the matter entirely open to the address assignment authorities and the routeing domain ad ministrators: a)If one address structure is very common and known a priori, the forwarding functions can be made much faster; b)If part of the address is known to be assigned locally to an end system, then the routeing can be simpler, use less memory, and be potentially faster, by not having to discriminate based on that portion of the address. c)If part of the address can be designated as globally unique by itself (as opposed to only the entire address having this property) a number of benefits accrue: 1)Errors in address administration causing duplicate addresses become much less likely 2)Automatic and dynamic NSAP address assignment becomes feasible without global knowledge or synchronisation 3)Routeing on this part of the address can be made simple and fast, since no address collisions will oc cur in the forwarding database. d)If a part of the address can be reserved for assignment purely on the basis of topological efficiency (as op posed to political or address administration ease), hier archical routeing becomes much more memory and bandwidth efficient, since the addresses and the topol ogy are in close correspondence. e)If an upper bound can be placed on the amount of ad dress space consumed by the Intra-domain routeing scheme, then the use of address space by Inter-domain routeing can be made correspondingly more flexible. 
The preferred address format of the Intra-domain ISIS protocol achieves these goals by being structured into two fixed-sized fields, as shown in figure 9 below:

    IDP | HO-DSP | ID | SEL

    IDP     Initial Domain Part
    HO-DSP  High Order Domain Specific Part
    ID      System Identifier (1 to 8 octets)
    SEL     NSAP Selector
    (The IDP and HO-DSP are used by level 2 routeing; the ID field is used by level 1 routeing.)

    Figure 9 - Preferred Address Format

The field marked IDP in the figure is precisely the IDP specified in ISO 8348/Add.2. The field marked HO-DSP is that portion of the DSP from ISO 8348/Add.2 whose structure, assignment, and meaning are not specified or constrained by the Intra-domain ISIS routeing protocol. However, the design presumes that the routeing domain administrator has at least some flexibility in assigning a portion of the HO-DSP field. The purpose and usage of the fields specified by the Intra-domain ISIS routeing protocol is explained in the following paragraphs.

B.2.1.1 The IDP + HO-DSP

Since the Intra-domain ISIS protocol is customised for operation with ISO 8473, all addresses are specified to use the preferred binary encoding of ISO 8348/Add.2.

B.2.1.2 The Selector (SEL) Field

The SEL field is intended for two purposes. Its main use is to allow for multiple higher-layer entities in End systems (such as multiple transport entities) for those systems which need this capability. This allows up to 256 NSAPs in a single End system. The advantage of reserving this field exclusively for local system administration is that the Intra-domain routeing functions need not store routeing information about, nor even look at, this field. If each individual NSAP were represented explicitly in routeing tables, the size of these tables would grow with the number of NSAPs, rather than with the number of End systems. Since Intra-domain routeing routes to systems, explicit recording of each NSAP brings no efficiency benefit and potentially consumes large amounts of memory in the Intermediate systems.

A second use for the SEL field is in Intermediate systems. Certain ISIS functions require that PDUs be encapsulated and sent to the Network Entity in an Intermediate system rather than to an NSAP and upward to a Transport entity. An example of this is the Partition Repair function of this International Standard. In order to use a level 2 path as if it were a single subnetwork in a level 1 area, PDUs are encapsulated and addressed to an IS on the other side of the partition (see NOTE 1). By reserving certain values of the SEL field in Intermediate systems for direct addressing of Intermediate system Network entities, the normal addressing and relaying functions of other Intermediate systems can be transparently used for such purposes.

NOTE 1 - This is a gross oversimplification for the purpose of illustrating the need for the SEL field. See 7.2.10.

B.2.1.3 The Identifier (ID) Field

The ID field is a flat, large identifier space for identifying OSI systems. The purpose of this field is to allow very fast, simple routeing to a large (but not unconstrained) number of End systems in a routeing domain. The Intra-Domain ISIS protocol uses this field for routeing within an area. While this field is only required to be unambiguous within a single area, if the values are chosen to be globally unambiguous the Intra-domain ISIS design can exploit this fact in the following ways. First, a certain amount of parallelism can be obtained during relaying. An IS can be simultaneously processing the ID field along with other fields (i.e. IDP, HO-DSP).
If the ID is found in the forwarding table, the IS can initiate forward ing while checking to make sure that the other fields have the expected value. Conversely, if the ID is not found the IS can assume that either the addressed NSAP is unreach able or exists only in some other area or routeing domain. In the case where the ID is not globally unique, the for warding table can indicate this fact and relaying delayed until the entire address is analysed and the route looked up. Second, a considerable savings can be obtained in manual address administration for all systems in the routeing do main. If the ID is chosen from the ISO 8802 48-bit address space, the ID is known to be globally unique. Furthermore, since LAN systems conforming to ISO 8802 often have their 48-bit MAC address stored in ROM locally, each sys tem can be guaranteed to have a globally unambiguous NET and NSAP(s) without centralised address administra tion at the area level.22Note, however, that the use of the ISO 8802 addresses does not avoid the necessity to run ISO 9542 or to maintain tables mapping NSAP addresses to MAC (i.e. SNPA) addresses on the ISO 8802 subnetwork. This is because there is no guarantee that a particular MAC address is always enabled (the LAN controller may be turned off) or that a system has only a single MAC address. This not only eliminates administra tive overhead, but also drastically reduces the possibility of duplicate NSAP addresses, which are illegal, difficult to di agnose, and often extremely difficult to isolate. An alternative to a large, flat space for the lowest level of routeing would be to hierarchically subdivide this field to allow more levels of routeing within a single routeing do main. The designers of the Intra-domain ISIS protocol considered that this would lead to an inferior routeing archi tecture, since: a)The cost of memory in the ISs was sufficiently reason able that large (e.g. 104 system) areas were quite fea sible, thus requiring at least 2 octets per level to ad dress b)Two levels of routeing within a routeing domain were sufficient (allowing domains of 106107 systems) be cause it was unlikely that a single organisation would wish to operate and manage a routeing domain much larger than that. c)Administrative boundaries often become the dominant concern once routeing domains reach a certain size. d)The additional burdens and potential for error in man ual address assignment were deemed serious enough to permit the use of a large, flat space. B.3 Use of the HO-DSP field in Intra-domain routeing Use of a portion of the HO-DSP field provides for hierar chical routeing within a routeing domain. A value is as signed to a set of ISs in order to group the ISs into a single area for the usual benefits of hierarchical routeing: a)Limiting the size of routeing tables in the ISs; b)conserving bandwidth by hierarchical summarisation of routeing information; c)designating portions of the network which are to have optimal routeing within themselves; and d)moderate firewalling of portions of the routeing do main from failures in other portions. It is important to note that the assignment of HO-DSP val ues is intended to provide the routeing domain administra tor with a mechanism to optimise the routeing within a large routeing domain. The Intra-domain ISIS designers did not intend the HO-DSP to be entirely consumed by many levels of address registration authority. 
Reserving the assignment of a portion of the HO-DSP field to the route ing domain administrator also allows the administrator to start with a single assigned IDP+HO-DSP and run the routing domain as a single area. As the routeing domain grows, the routeing domain administrator can then add ar eas without the need to go back to the address administra tion authority for further assignments. Areas can be added and re-assigned within the routeing domain without involv ing the external address administration authority. A useful field to reserve as part of the HO-DSP would be 2 octets,permitting up to 65,536 areas in a routeing domain. This is viewed as a reasonable compromise between route ing domain size and address space consumption. The field may be specified as flat for the same reasons that the ID field may be flat. B.3.1 Addressing considerations for Inter-domain Routeing It is in the Inter-domain arena where the goals of routeing efficiency and administrative independence collide most strongly. Although the OSI Routeing Framework explicitly gives priority in Inter-domain routeing to considerations of autonomy and firewalls over efficiency, it must be feasible to construct an Inter-Domain topology that both produces isolable domains and relays data at acceptable cost. Since no routeing information is exchanged across domain boundaries with static routeing, the practicality of a given Inter-domain topology is essentially determined by the size of the routeing tables that are present at the boundary ISs. If these tables become too large, the memory needed to store them, the processing needed to search them, and the bandwidth needed to transmit them within the routeing do main all combine to disallow certain forms of interconnection. Inter-domain routeing primarily computes routes to other routeing domains33This International Standard also uses static Inter-domain tables for routeing to individual End systems across dynamically assigned circuits, and also to End systems whose addresses do not conform to the address construction rules. . If there is no correspondence between the address registration hierarchy and the organisation of routeing domains (and their interconnection) then the task of static table maintenance quickly becomes a nightmare, since each and every routeing domain in the OSIE would need a table entry potentially at every boundary IS of every other routeing domain. Luckily, there is some reason to be lieve that a natural correspondence exists, since at least at the global level the address registration authorities fall within certain topological regions. For example, most of the routeing domains which obtained their IDP+HO-DSP from a hierarchy of French authorities are likely to reside in France and be more strongly connected with other routeing domains in France that with routeing domains in other countries. There are enough exceptions to this rule, however, to be a cause for concern. The scenarios cited in B.2 all exist today and may be expected to remain common for the foreseeable future. Consider as a practical case the High Energy Phys ics Network (HEPnet), which contains some 17000 End systems, and an unknown number of intermediate systems44The number of ISs is hard to estimate since some ISs and links are in fact shared with other networks, such as the similarly organised NASA Space Physics network, or SPAN. . 
This network operates as a single routeing domain in order to provide a known set of services to a known community of users, and is funded and cost-justified on this basis. This network is international in scope (at least 10 countries in North America, Europe, and the far east) and yet its topol ogy does not map well onto existing national boundaries. Connectivity is richer between CERN and FERMIlab, for example than between many points within the U.S. More importantly, this network has rich connectivity with a number of other networks, including the PDNs of the vari ous countries, the NSFnet in the U.S., the international ESnet (Energy Sciences Network), the general research Internet, and military networks in the U.S. and elsewhere. None of these other networks shares a logical part of the NSAP address hierarchy with HEPnet55It is conceivable that ISO would sanction such networks by assigning a top-level IDI from the ISO non-geographic AFI, but this is unlikely and would only exacerbate the problem if many such networks were assigned top-level registrations. . If the only method of routing from the HEPnet to these other networks was to place each within one and only one of the existing registra tion authorities, and to build static tables showing these re lationships, the tables would clearly grow as O(n2). It seems therefore, that some means must be available to as sign addresses in a way that captures the Inter-Domain to pology, and which co-exists cleanly with both the adminis trative needs of the registration authorities, and the algo rithms employed by both the Intra- and Inter-domain routeing protocols. As alluded to in an earlier clause, it seems prudent to leave some portion of the address space (most likely from the HO-DSP part) sufficiently undefined and flexible that various Inter-domain topologies may be efficiently constructed. Annex C Implementation Guidelines and Examples (This annex is informative) C.1 Routeing Databases Each database contains records as defined in the following sub-clauses. The following datatypes are defined. FROM CommonMgmt IMPORT NSAPAddress, AddressPrefix, BinaryAbsoluteTime; PDU Type lspID = ARRAY [0..7] OF Octet; systemID = ARRAY [0..5] OF Octet; octetTimeStamp = BinaryAbsoluteTime; C.1.1 Level 1 Link State Database This database is kept by Level 1 and Level 2 Intermediate Systems, and consists of the latest Level 1 Link State PDUs from each Intermediate System (or pseudonode) in the area. The Level 1 Link State PDU lists Level 1 links to the Inter mediate System that originally generated the Link State PDU. RECORD adr: lspID; (* 8 octet ID of LSP originator *) type: (Level1IntermediateSystem, AttachedLevel2IntermediateSystem, UnattachedLevel2IntermediateSystem); seqnum: [0..SequenceModulus 1]; LSPage: [0..MaxAge]; (*Remaining Lifetime *) expirationTime: TimeStamp; (*Time at which LSP age became zero (see 7.3.16.4). *) SRMflags: ARRAY[1..(maximumCircuits + maximumVirtualAdjacencies)] OF BOOLEAN; (*Indicates this LSP to be sent on this circuit. Note that level 2 Intermediate systems may send level 1 LSPs to other partitions (if any exist). Only one level 2 Intermediate system per partition does this. For level 1 Intermediate Systems the array is just maximumCircuits long. *) SSNflags: ARRAY[1..maximumCircuits + maximumVirtualAdjacencies] OF BOOLEAN; (*Indicates that information about this LSP shall be included in the next partial sequence number PDU transmitted on this circuit. 
*)
  POINTER TO LSP; (* The received LSP *)
END;

C.1.2 Level 2 Link State Database

This database is kept by Level 2 Intermediate Systems, and consists of the latest Level 2 Link State PDUs from each Level 2 Intermediate System (or pseudonode) in the domain. The Level 2 Link State PDU lists Level 2 links to the Intermediate System that originally generated the Link State PDU.

RECORD
  adr: lspID; (* 8 octet ID of LSP originator *)
  type: (AttachedLevel2IntermediateSystem, UnattachedLevel2IntermediateSystem);
  seqnum: [0..SequenceModulus - 1];
  LSPage: [0..MaxAge]; (* Remaining Lifetime *)
  expirationTime: TimeStamp; (* Time at which LSP age became zero (see 7.3.16.4). *)
  SRMflags: ARRAY[1..maximumCircuits] OF BOOLEAN; (* Indicates this LSP is to be sent on this circuit. *)
  SSNflags: ARRAY[1..maximumCircuits] OF BOOLEAN; (* Indicates that information about this LSP must be included in the next partial sequence number PDU transmitted on this circuit. *)
  POINTER TO LSP; (* The received LSP *)
END;

C.1.3 Adjacency Database

This database is kept by all systems. Its purpose is to keep track of neighbours. For Intermediate systems, the adjacency database comprises a database with an entry for each:
- Adjacency on a Point to Point circuit.
- Broadcast Intermediate System Adjacency. (Note that both a Level 1 and a Level 2 adjacency can exist between the same pair of systems.)
- Broadcast End system Adjacency.
- Potential SVC on a DED circuit (maximumSVCAdjacencies for a DA circuit, or 1 for a Static circuit).
- Virtual Link Adjacency.

Each entry contains the parameters in Clause 11 for the Adjacency managed object. It also contains the variable used to store the remaining holding time for each Adjacency, IDEntry and NETEntry entry, as defined below.

IDEntry = RECORD
  ID: systemID; (* The 6 octet System ID of a neighbour End system extracted from the SOURCE ADDRESS field of its ESH PDUs. *)
  entryRemainingTime: Unsigned [1..65535] (* The remaining holding time in seconds for this entry. This value is not accessible to system management. An implementation may choose to implement the timer rules without an explicit remainingTime being maintained, for example by the use of asynchronous timers. It is present here in order to permit a consistent description of the timer rules. *)
END;

NETEntry = RECORD
  NET: NetworkEntityTitle; (* The NET of a neighbour Intermediate system as reported in its IIH PDUs. *)
  entryRemainingTime: Unsigned [1..65535] (* The remaining holding time in seconds for this entry. This value is not accessible to system management. An implementation may choose to implement the timer rules without an explicit remainingTime being maintained, for example by the use of asynchronous timers. It is present here in order to permit a consistent description of the timer rules. *)
END;

C.1.4 Circuit Database

This database is kept by all systems. Its purpose is to keep information about a circuit. It comprises an ARRAY[1..maximumCircuits]. Each entry contains the parameters in Clause 11 for a Circuit managed object (see 11.3). It also contains the remainingHelloTime (WordUnsigned [1..65535] seconds) variable for the Circuit. This variable is not accessible to system management. An implementation may choose to implement the timer rules without an explicit remainingHelloTime being maintained, for example by the use of asynchronous timers. It is present here in order to permit a consistent description of the timer rules. Additionally, for Circuits of type X.25 Static Outgoing or X.25 DA, it contains the recallCount (Unsigned[0..255]) variable for the Circuit. This variable is not accessible to system management. It is used to keep track of recall attempts.

C.1.5 Level 1 Shortest Paths Database

This database is kept by Level 1 and Level 2 Intermediate Systems (unless each circuit is Level 2 Only). It is computed by the Level 1 Decision Process, using the Level 1 Link State Database. The Level 1 Forwarding Database is a subset of this database.

RECORD
  adr: systemID; (* 6 octet ID of destination system *)
  cost: [1..MaxPathMetric]; (* Cost of best path to destination system *)
  adjacencies: ARRAY[1..maximumPathSplits] OF POINTER TO Adjacency; (* Pointer to adjacency for forwarding to system adr *)
END;

C.1.6 Level 2 Shortest Paths Database

This database is kept by Level 2 Intermediate Systems. It is computed by the Level 2 Decision Process, using the Level 2 Link State Database. The Level 2 Forwarding Database is a subset of this database.

RECORD
  adr: AddressPrefix; (* destination prefix *)
  cost: [1..MaxPathMetric]; (* Cost of best path to destination prefix *)
  adjacencies: ARRAY[1..maximumPathSplits] OF POINTER TO Adjacency; (* Pointer to adjacency for forwarding to prefix adr *)
END;

C.1.7 Level 1 Forwarding Database

This database is kept by Level 1 and Level 2 Intermediate Systems (unless each circuit is Level 2 Only). It is used to determine where to forward a data NPDU with destination within this system's area. It is also used to determine how to reach a Level 2 Intermediate System within the area, for data PDUs with destinations outside this system's area.

RECORD
  adr: systemID; (* 6 octet ID of destination system. Destination 0 is special, meaning "nearest level 2 Intermediate system" *)
  splits: [0..maximumPathSplits]; (* Number of valid output adjacencies for reaching adr (0 indicates it is unreachable) *)
  nextHop: ARRAY[1..maximumPathSplits] OF POINTER TO adjacency; (* Pointer to adjacency for forwarding to destination system *)
END;

C.1.8 Level 2 Forwarding Database

This database is kept by Level 2 Intermediate systems. It is used to determine where to forward a data NPDU with destination outside this system's area.

RECORD
  adr: AddressPrefix; (* address of destination area *)
  splits: [0..maximumPathSplits]; (* Number of valid output adjacencies for reaching adr (0 indicates it is unreachable) *)
  nextHop: ARRAY[1..maximumPathSplits] OF POINTER TO adjacency; (* Pointer to adjacency for forwarding to destination area *)
END;

C.2 SPF Algorithm for Computing Equal Cost Paths

An algorithm invented by Dijkstra (see references), known as shortest path first (SPF), is used as the basis for the route calculation. It has a computational complexity of the square of the number of nodes, which can be decreased to the number of links in the domain times the log of the number of nodes for sparse networks (networks which are not highly connected). A number of additional optimisations are possible:

a) If the routeing metric is defined over a small finite field (as in this International Standard), the factor of log n may be removed by using data structures which maintain a separate list of systems for each value of the metric rather than sorting the systems by logical distance.
b) Updates can be performed incrementally without requiring a complete recalculation. However, a full update must be done periodically to recover from data corruption, and studies suggest that with a very small number of link changes (perhaps 2) the expected computation complexity of the incremental update exceeds the complete recalculation. Thus, this International Standard specifies the algorithm only for the full update.
c) If only End system LSP information has changed, it is not necessary to re-compute the entire Dijkstra tree for the IS. If the proper data structures exist, End Systems may be attached and detached as leaves of the tree and their forwarding information base entries altered as appropriate.

The original SPF algorithm does not support load splitting over multiple paths. The algorithm in this International Standard does permit load splitting by identifying a set of equal cost paths to each destination rather than a single least cost path.

C.2.1 Databases

PATHS - This represents an acyclic directed graph of shortest paths from the system S performing the calculation. It is stored as a set of triples of the form <N, d(N), {Adj(N)}>, where:
  N is a system identifier. In the level 1 algorithm, N is a 7 octet ID. For a non-pseudonode it is the 6 octet system ID, with a 0 appended octet. For a pseudonode it is a true 7 octet quantity, comprised of the 6 octet Designated Intermediate System ID and the extra octet assigned by the Designated Intermediate System. In the level 2 algorithm it is either a 7 octet Intermediate System or pseudonode ID (as in the level 1 algorithm), or it is a variable length address prefix (which will always be a leaf, i.e. End system, in PATHS).
  d(N) is N's distance from S (i.e. the total metric value from N to S).
  {Adj(N)} is a set of valid adjacencies that S may use for forwarding to N.
When a system is placed on PATHS, the path(s) designated by its position in the graph is guaranteed to be a shortest path.

TENT - This is a list of triples of the form <N, d(N), {Adj(N)}>, where N, d(N) and {Adj(N)} are as defined above for PATHS. TENT can intuitively be thought of as a tentative placement of a system in PATHS. In other words, the triple <N, x, {A}> in TENT means that if N were placed in PATHS, d(N) would be x, but N cannot be placed on PATHS until it is guaranteed that no path shorter than x exists. The triple <N, x, {A,B}> in TENT means that if N were placed in PATHS, d(N) would be x via either adjacency A or B.

NOTE - As described above (see 7.2.6), it is suggested that the implementation keep the database TENT as a set of lists of triples of the form <*, Dist, *>, one for each possible distance Dist. In addition it is necessary to be able to process those systems which are pseudonodes before any non-pseudonodes at the same distance Dist.

C.2.2 Use of Metrics in the SPF Calculation

Internal metrics are not comparable to external metrics. Therefore, the cost of the path from N to S for external routes (routes to destinations outside of the routeing domain) may include both internal and external metrics. The cost of the path from N to S (called d(N) below in database PATHS) may therefore be maintained as a two-dimensional vector quantity (specifying internal and external metric values). In incrementing d(N) by 1, if the internal metric value is less than the maximum value MaxPathMetric, then the internal metric value is incremented by one and the external metric value left unchanged; if the internal metric value is equal to the maximum value MaxPathMetric, then the internal metric value is set to 0 and the external metric value is incremented by 1. Note that this can be implemented in a straightforward manner by maintaining the external metric as the high order bits of the distance.

NOTE - In the code of the algorithm below, the current path length is held in a variable tentlength. This variable is a two-dimensional quantity tentlength = (internal, external) and is used for comparing the current path length with d(N) as described above.

C.2.3 Overview of the Algorithm

The basic algorithm, which builds PATHS from scratch, starts out by putting the system doing the computation on PATHS (no shorter path to SELF can possibly exist). TENT is then pre-loaded from the local adjacency database. Note that a system is not placed in PATHS unless no shorter path to that system exists. When a system N is placed in PATHS, the path to each neighbour M of N, through N, is examined, as the path to N plus the link from N to M.
If <M,*,*> is in PATHS, this new path will be longer, and thus ignored. If <M,*,*> is in TENT, and the new path is shorter, the old entry is removed from TENT and the new path is placed in TENT. If the new path is the same length as the one in TENT, then the set of potential adjacencies {Adj(M)} is set to the union of the old set (in TENT) and the new set {Adj(N)}. If M is not in TENT, then the path is added to TENT.

Next the algorithm finds the triple <N, x, {Adj(N)}> in TENT with minimal x.

NOTE - This is done efficiently because of the optimisation described above. When the list of triples for distance Dist is exhausted, the algorithm then increments Dist until it finds a list with a triple of the form <*, Dist, *>.

N is placed in PATHS. We know that no path to N can be shorter than x at this point because all paths through systems already in PATHS have already been considered, and paths through systems in TENT will have to be greater than x because x is minimal in TENT. When TENT is empty, PATHS is complete.

C.2.4 The Algorithm

The Decision Process Algorithm must be run once for each supported routeing metric. A Level 1 Intermediate System runs the algorithm using the Level 1 LSP database to compute Level 1 paths. In addition a Level 2 Intermediate System runs the algorithm using the Level 2 LSP database to compute Level 2 paths. If this system is a Level 2 Intermediate System which supports the partition repair optional function, the Decision Process algorithm for computing Level 1 paths must be run twice for the default metric. The first execution is done to determine which of the area's manualAreaAddresses are reachable in this partition, and to elect a Partition Designated Level 2 Intermediate System for the partition. The Partition Designated Level 2 Intermediate System will determine if the area is partitioned and will create virtual Level 1 links to the other Partition Designated Level 2 Intermediate Systems in the area in order to repair the Level 1 partition. This is further described in 7.2.10.

Step 0: Initialise TENT and PATHS to empty. Initialise tentlength to (0,0). (tentlength is the pathlength of elements in TENT we are examining.)
a) Add <SELF, 0, W> to PATHS, where W is a special value indicating traffic to SELF is passed up to Transport (rather than forwarded).
b) Now pre-load TENT with the local adjacency database. (Each entry made to TENT must be marked as being either an End system or an Intermediate System to enable the check at the end of Step 2 to be made correctly.) For each adjacency Adj(N) (including Manual Adjacencies, or for Level 2 enabled Reachable Addresses) on enabled circuits, to system N of SELF in state Up, compute
   d(N) = cost of the parent circuit of the adjacency (N), obtained from metric_k, where k = one of default metric, delay metric, monetary metric, error metric.
   Adj(N) = the adjacency number of the adjacency to N.
c) If a triple <N, x, {Adj(M)}> is in TENT, then: if x = d(N), then {Adj(M)} <- {Adj(M)} union Adj(N).
d) If there are now more adjacencies in {Adj(M)} than maximumPathSplits, then remove excess adjacencies as described in 7.2.7.
e) If x < d(N), do nothing.
f) If x > d(N), remove <N, x, {Adj(M)}> from TENT and add the triple <N, d(N), Adj(N)>.
g) If no triple <N, x, {Adj(M)}> is in TENT, then add <N, d(N), Adj(N)> to TENT.
h) Now add any systems to which the local Intermediate system does not have adjacencies, but which are mentioned in neighbouring pseudonode LSPs. The adjacency for such systems is set to that of the Designated Intermediate System.
i) For all broadcast circuits in state On, find the LSP with LSP number zero and with the first 7 octets of LSPID equal to the LnCircuitID for that circuit (i.e. the pseudonode LSP for that circuit). If it is present, for all the neighbours N reported in all the LSPs of this pseudonode which do not exist in TENT, add an entry <N, d(N), Adj(N)> to TENT, where
   d(N) = metric_k of the circuit.
   Adj(N) = the adjacency number of the adjacency to the DR.
j) Go to Step 2.

Step 1: Examine the zeroth Link State PDU of P, the system just placed on PATHS (i.e. the Link State PDU with the same first 7 octets of LSPID as P, and LSP number zero).
a) If this LSP is present, and the LSP Database Overload bit is clear, then for each LSP of P (i.e. all the Link State PDUs with the same first 7 octets of LSPID as P, irrespective of the value of LSP number) compute
   dist(P,N) = d(P) + metric_k(P,N)
   for each neighbour N (both Intermediate System and End system) of the system P. If the LSP Database Overload bit is set, only consider the End system neighbours of the system P. d(P) is the second element of the triple <P, d(P), {Adj(P)}> and metric_k(P,N) is the cost of the link from P to N as reported in P's Link State PDU.
b) If dist(P,N) > MaxPathMetric, then do nothing.
c) If <N, d(N), {Adj(N)}> is in PATHS, then do nothing.
   NOTE - d(N) must be less than dist(P,N), or else N would not have been put into PATHS. An additional sanity check may be done here to ensure d(N) is in fact less than dist(P,N).
d) If a triple <N, x, {Adj(N)}> is in TENT, then:
   1) If x = dist(P,N), then {Adj(N)} <- {Adj(N)} union Adj(P).
   2) If there are now more adjacencies in {Adj(N)} than maximumPathSplits, then remove excess adjacencies, as described in 7.2.7.
   3) If x < dist(P,N), do nothing.
   4) If x > dist(P,N), remove <N, x, {Adj(N)}> from TENT and add <N, dist(P,N), {Adj(P)}>.
e) If no triple <N, x, {Adj(N)}> is in TENT, then add <N, dist(P,N), {P}> to TENT.

Step 2: If TENT is empty, stop; else:
a) Find the element <P, x, {Adj(P)}> with minimal x as follows:
   1) If an element <*, tentlength, *> remains in TENT in the list for tentlength, choose that element. If there is more than one element in the list for tentlength, choose one of the elements (if any) for a system which is a pseudonode in preference to one for a non-pseudonode. If there are no more elements in the list for tentlength, increment tentlength and repeat Step 2.
   2) Remove <P, tentlength, {Adj(P)}> from TENT.
   3) Add <P, d(P), {Adj(P)}> to PATHS.
   4) If this is the Level 2 Decision Process running, and the system just added to PATHS listed itself as Partition Designated Level 2 Intermediate system, then additionally add <AREA.P, d(P), {Adj(P)}> to PATHS, where AREA.P is the Network Entity Title of the other end of the Virtual Link, obtained by taking the first AREA listed in P's Level 2 LSP and appending P's ID.
   5) If the system just added to PATHS was an End system, go to Step 2; else go to Step 1.

NOTE - In the Level 2 context, the End systems are the set of Reachable Address Prefixes and the set of area addresses with zero cost.

C.3 Forwarding Process

C.3.1 Example pseudo-code for the forwarding procedure described in 7.4.3

This procedure chooses, from the Level 1 forwarding database if level is level1, or from the Level 2 forwarding database if level is level2, an adjacency on which to forward PDUs for destination dest. A pointer to the adjacency is returned in adj, and the procedure returns the value True. If no suitable adjacency exists the procedure returns the value False, in which case a call should be made to Drop(Destination Address Unreachable, octetNumber). If queue length values are available to the forwarding process, the circuit with the minimal queue length of all candidate circuits is chosen; otherwise, they are used in round robin fashion.

PROCEDURE Forward(
    level: (level1, level2),
    dest: NetworkLayerAddress,
    VAR adj: POINTER TO adjacency): BOOLEAN;
VAR
  adjArray: ARRAY OF ForwardingDatabaseRecords;
  temp, index, minQueue: CARDINAL;
BEGIN
  (* Set adjArray to the appropriate database *)
  IF level = level1 THEN
    adjArray := level1ForwardingDatabase
  ELSE
    adjArray := level2ForwardingDatabase
  END;
  (* Perform the appropriate hashing function to obtain an index into the database *)
  IF Hash(level, dest, index) THEN
    IF adjArray[index].splits > 0 THEN
      (* Find minimum queue size over all equal cost paths *)
      minQueue := MaxUnsigned;
      temp := adjArray[index].lastChosen + 1; (* start off after last time *)
      FOR i := 1 TO adjArray[index].splits DO (* for all equal cost paths to dest *)
        IF temp > adjArray[index].splits THEN (* after end of valid entries, wrap to first *)
          temp := 1
        ELSE
          temp := temp + 1
        END;
        IF QueueSize(adjArray[index].nextHop[temp]) < minQueue THEN
          minQueue := QueueSize(adjArray[index].nextHop[temp]);
          adj := adjArray[index].nextHop[temp];
          adjArray[index].lastChosen := temp
        END
      END;
      Forward := TRUE
    ELSE
      Forward := FALSE (* There must be at least one valid output adjacency *)
    END
  ELSE
    Forward := FALSE (* Hash returned destination unknown *)
  END
END Forward;

Annex D Congestion Control and Avoidance (This annex is informative)

D.1 Congestion Control

The transmit management subroutine handles congestion control. Transmit management consists of the following components:
- Square root limiter. Reduces buffer occupancy time per PDU by using a square root limiter algorithm. The square root limiter also queues PDUs for an output circuit, and prevents buffer deadlock by discarding PDUs when the buffer pool is exhausted. Clause D.1.1 specifies the Square Root Limiter Process.
- Originating PDU limiter. Limits originating NPDU traffic when necessary to ensure that transit NPDUs are not rejected. An originating NPDU is an NPDU resulting from an NSDU from the Transport at this ES. A transit NPDU is an NPDU from another system to be relayed to another destination ES.
- Flusher. Flushes PDUs queued for an adjacency that has gone down.

Information for higher layer (Transport) congestion control procedures is provided by the setting of the congestion experienced bit in the forwarded data NPDUs.

D.1.1 Square Root Limiter

The square root limiter discards a data NPDU by calling the ISO 8473 discard PDU function with the reason "PDU Discarded due to Congestion" when the number of data NPDUs on the circuit output queue exceeds the discard threshold, Ud. Ud is given by a formula (not reproduced in this ASCII rendering) in terms of:
  Nb = Number of Routeing Layer buffers (maximumBuffers) for all output circuits.
  Nc = Number of active output circuits (i.e. Circuits in state On).
The output queue is a queue of buffers containing data NPDUs which have been output to that circuit by the forwarding process, and which have not yet been transmitted by the circuit. It does not include NPDUs which are held by the data link layer for the purpose of retransmission. Where a data NPDU is to be fragmented by this Intermediate system over this circuit, each fragment shall occupy a separate buffer and shall be counted as such in the queue length. If the addition of all the buffers required for the fragmentation of a single input data NPDU would cause the discard threshold for that queue to be exceeded, it is recommended that all those fragments (including those which could be added without causing the threshold to be exceeded) be discarded.

D.1.2 Originating PDU Limiter

TEMPORARY NOTE - Strictly this function is an End System function. However it is closely coupled to the routeing function, particularly in the case of real systems which are performing the functions of both an Intermediate System and an End System (i.e. systems which can both initiate and terminate data NPDUs and perform relaying functions). Therefore, until a more appropriate location for this information can be determined, this function is described here.

The originating PDU limiter first distinguishes between originating NPDUs and transit NPDUs. It then imposes a limit on the number of buffers that originating NPDUs can occupy on a per circuit basis. In times of heavy load, originating NPDUs may be rejected while transit NPDUs continue to be routed. This is done because originating NPDUs have a relatively short wait, whereas transit NPDUs, if rejected, have a long wait: a transport retransmission period.

The originating PDU limiter accepts as input:
- An NSDU received from the Transport Layer
- A transmit complete signal from the circuit for an ISO 8473 Data PDU.

The originating PDU limiter produces the following as output:
- PDU accepted
- PDU rejected
- Modifications to the originating PDU counter

There is a counter, N, and an originating PDU limit, originatingQueueLimit, for each active output circuit. Each N is initialised to 0. The originatingQueueLimit is set by management to the number of buffers necessary to prevent the circuit from idling.

D.1.3 Flusher

The flusher ensures that no NPDU is queued on a circuit whose state is not On, or on a non-existent adjacency, or one whose state is not Up.

D.2 Congestion Avoidance

D.2.1 Buffer Management

The Forwarding Process supplies and manages the buffers necessary for relaying. PDUs shall be discarded if buffer thresholds are exceeded. If the average queue length on the input circuit or the forwarding processor or the output circuit exceeds QueueThreshold, the congestion experienced bit shall be set in the QoS maintenance option of the forwarded data PDU (provided the QoS maintenance option is present).
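As a compact illustration of the PATHS/TENT computation described in C.2 above, the following sketch (Python; not part of this International Standard) implements a simplified, single-metric version of the algorithm. It retains the equal-cost adjacency sets of C.2.3 but omits pseudonodes, the internal/external metric distinction of C.2.2, and maximumPathSplits pruning; all identifiers are illustrative only.

    def spf(self_id, adjacencies, links, max_path_metric=1023):
        # Returns {system: (distance, set_of_first_hop_adjacencies)}.
        # adjacencies: {neighbour: (metric, adjacency_id)} for SELF's circuits (Step 0).
        # links: {system: {neighbour: metric}} as reported in the LSP database (Step 1).
        paths = {self_id: (0, set())}                  # PATHS: finalised shortest paths
        tent = {n: (metric, {adj})                     # TENT: tentative <N, d(N), {Adj(N)}>
                for n, (metric, adj) in adjacencies.items()}
        while tent:
            # Step 2: move the closest tentative system into PATHS
            p = min(tent, key=lambda n: tent[n][0])
            d_p, adj_p = tent.pop(p)
            paths[p] = (d_p, adj_p)
            # Step 1: relax the links reported by P
            for n, metric in links.get(p, {}).items():
                dist = d_p + metric
                if dist > max_path_metric or n in paths:
                    continue
                if n not in tent or dist < tent[n][0]:
                    tent[n] = (dist, set(adj_p))
                elif dist == tent[n][0]:
                    tent[n] = (dist, tent[n][1] | adj_p)   # keep equal cost paths
        return paths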
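The square root limiter of D.1.1 above defines its discard threshold Ud in terms of Nb and Nc, but the formula itself did not survive conversion to ASCII. The sketch below (Python; not part of this International Standard) therefore assumes the classic square-root-limiter form Ud = Nb/sqrt(Nc), which matches the algorithm's name but should be verified against the PostScript original; it also applies the D.2.1 congestion-experienced rule, simplified to use the instantaneous rather than the average queue length.

    import math

    def discard_threshold(nb, nc):
        # Assumed form of the square root limiter threshold: Ud = Nb / sqrt(Nc).
        # Nb = maximumBuffers over all output circuits; Nc = circuits in state On.
        return nb / math.sqrt(nc)

    def enqueue_data_npdu(queue, nb, nc, queue_threshold):
        # Returns (accepted, congestion_experienced) for one data NPDU placed on an
        # output queue, per D.1.1 (discard above Ud) and D.2.1 (mark above QueueThreshold).
        if len(queue) >= discard_threshold(nb, nc):
            return False, False        # discard: "PDU Discarded due to Congestion"
        queue.append("data NPDU")
        return True, len(queue) > queue_threshold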
