Thursday, May 29, 2014

Towards a TWI+PCI Implementation of IBM's Token Ring Network Architecture

The following is a verbatim copy of a response I'd written for a private discussion forum at an online university, this day. The discussion, essentially, was a comparison and contrast of a number of network technologies applied at "Layer 0" of the OSI seven-layer model. My response, in this item, focuses specifically on token passing, as used in each of FDDI networking and token ring networking, among the available digital systems interconnects.

The topic of token ring networking has particularly caught my attention as a developer, considering the possibility of refactoring the technology for application onto I2C interconnects, ostensibly within a parallel computing model incorporating CORBA. In my response, after the items about token passing in token ring networking, I've begun "taking some notes" towards such an ostensible redesign of token ring networking for a parallel computing model onto I2C and PCI interconnects.


Token access, or token passing, is a network flow control technique used in token ring networks. The technologies for token ring networking were originally developed by IBM, in the 1970s, and may still be applicable in some network environments -- for instance, where network fault tolerance is a high priority, such as in networked industrial automation and measurement environments.

A token ring network utilizes a star topology, in which a multistation access unit (MAU) provides ports for eight individual hosts, additionally one port for a ring in cable and one port for a ring out cable. Effectively, the MAUs are connected in a ring, with individual hosts each connected to a single MAU. Insofar as it being a "star" network topology, geometrically it's not exactly like the conventional multi-connected, five-pointed star; it might seem more like a ring with "leaf" nodes.

On a token ring network, after the network is initialized and an active monitor station is determined on the network, an electronic token is sent onto the network and transferred from station to station, until arriving at a station that has data to send on the network.

At the sending station, a single bit in the token frame is changed before retransmission, thus transforming the token frame into a start-of-frame sequence for an information frame. Information is then appended to the start-of-frame sequence, that information necessarily including an address of a specific format (byte?) identifying the intended receiving station for the frame.

The information frame is then transmitted onto the network, and successively retransmitted -- whether along the MAUs only? or entirely from host to host -- until arriving at the intended receiving station. The receiving station then copies the information frame, sets two bits in the information frame to indicate that it has received the information, and retransmits the information frame. The information frame then continues around the network until arriving at the original sending station.

Due to the application of the token passing technique, data collision is effectively not a concern on a token ring network.
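The token-passing sequence described above can be sketched as a minimal simulation. The station names, frame fields, and event strings here are illustrative assumptions, not fields from the IEEE 802.5 frame format:

```python
# Minimal, illustrative simulation of token passing on a ring.
# A free token circulates; a station with pending data "flips the
# token bit" (here, replaces the token with an information frame),
# the addressed receiver copies the frame and sets two status bits,
# and the frame is stripped when it returns to its sender.

class Station:
    def __init__(self, name):
        self.name = name
        self.pending = None   # (dest_name, payload) waiting to send
        self.received = []    # payloads copied off the ring

def run_ring(stations, max_hops=100):
    """Circulate a token around the ring; return a log of events."""
    log = []
    frame = {"token": True}   # a free token, initially
    i = 0
    for _ in range(max_hops):
        st = stations[i % len(stations)]
        if frame.get("token") and st.pending:
            dest, payload = st.pending
            st.pending = None
            # The token becomes a start-of-frame sequence, with
            # addressing and payload appended.
            frame = {"token": False, "src": st.name, "dest": dest,
                     "payload": payload,
                     "addr_recognized": False, "frame_copied": False}
            log.append(f"{st.name} seized token, sending to {dest}")
        elif not frame.get("token"):
            if st.name == frame["dest"]:
                # Receiver copies the frame and sets the two status bits.
                st.received.append(frame["payload"])
                frame["addr_recognized"] = True
                frame["frame_copied"] = True
                log.append(f"{st.name} copied frame")
            elif st.name == frame["src"]:
                # Frame returned to its sender: strip it, reissue a token.
                log.append(f"{st.name} stripped frame, reissued token")
                frame = {"token": True}
        i += 1
    return log

stations = [Station("A"), Station("B"), Station("C"), Station("D")]
stations[0].pending = ("C", "hello")
events = run_ring(stations)
```

Because only the holder of the token may transmit, the simulation never produces two frames on the ring at once, which is the collision-avoidance property noted above.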


Network token passing is also used in FDDI networking.

Works referenced:
[1] Cisco DocWiki. Token Ring/IEEE 802.5.
[2] CTDP. Token Ring. The CTDP Network Certification Reference, Version 0.6.2.

On a sidebar, with regards to developing a concept for an academic thesis ostensibly regarding parallel computing: token ring networking could be considered for a design of an I2C network. I2C is a "wire protocol" available via interfaces in some single-board computing platforms and the Arduino microcontroller platform. In that regard, I would propose some notes, which I will presently try to extend on, in draft editions of this comment:
  • An I2C implementation of a token ring methodology would not necessarily require an application of an MAU, per se.
  • I've not studied up on a lot of the details of I2C and the corresponding TWI protocol; however, I've read that each utilizes some sort of addressing format, and fundamentally a centralized, single-"master" design, in which one I2C device is a single "master" device and would indicate, via synchronization with clock signals, the address of an intended "receiving" device before sending data to that receiving device. All devices on the I2C network would be, effectively, "listening" at all times on the I2C network, to detect when the "master" device indicates that a datum is to be sent on the network, to indicate the peer address of the device to which the datum is to be sent, and then to indicate the end of the transmission to that device. (Of course, I should wish to review an exacting, authoritative reference about I2C and/or TWI, at that. I've been of a presumption that it would allow for a "multi-master" protocol to be defined -- focusing on TWI -- however, I'm not certain of how that might be approached in the exact "wire protocol.")
  • The ostensible I2C token ring implementation could be designed such that it would be a "single-mastering" protocol, in which the single "master" device would be the active monitor station of the I2C token ring.
  • Each device on the I2C token ring network would need exactly two I2C interfaces, such that one would be the device's own "TWI ring in" and the other, the "TWI ring out".
  • Termination of a TWI ring network: If it were implemented in a pre-designed hardware configuration, the hardware could be designed such that it would define a "built-in ring". I need to study more about I2C or TWI, to specify how that could be approached, in a high-level view of the wiring.
  • Extensibility of a TWI ring network: The monitor/master device could be defined effectively as to provide exactly two ring network interfaces -- one for an "on board" ring, and one for an "on PCI bus" ring, if the hardware of the device network was implemented with a PCI interface on a single PCB. Then, each PCB would need to be identified with a "board number", as well as each I2C device on the PCB being identified with a "device number", thus creating effectively a two-part addressing protocol for messages across the entire TWI "ring", including devices on other PCBs.
  • Termination of a ring network as designed onto TWI + PCI interfaces: On the PCI interface, the network could be designed more like an IBM token ring network, insofar as a single "active monitor" would identify itself, and the other PCBs on the network would be designed as to acknowledge the designation of the "active monitor".
  • Once the networking and addressing protocols were specified, it could be possible to develop a peer-to-peer or client-server application on the network, e.g. using the existing work of the CORBA specifications -- specifically focusing on GIOP, ostensibly for extension onto any single static-model TWI/PCI token ring network -- towards developing a parallel computing model such as could be implemented within a single PC architecture, using conventional PC peripheral interfaces including PCI.
  • It might seem unorthodox, in some ways -- if not altogether unoriginal, if simply for it being an extension of existing electronics protocols. Of course, if it could be towards the development of a useful parallel computing model, then certainly one would wish to inquire as to what parallel computing could be used for, in real-world applications within contemporary enterprise. Personally, I'm more concerned about the hardware design, at this time. I think it could be useful in an artificial neural network design, but I've not read a lot about that, and neither about statistical computing.
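The two-part addressing noted in these bullets can be sketched concretely. The field widths here are assumptions chosen only for illustration: four bits of board number and seven bits of device number, the latter matching the 7-bit I2C address space:

```python
# Hypothetical two-part addressing for the proposed TWI+PCI ring:
# a "board number" identifying the PCB, and a "device number"
# identifying the I2C/TWI device on that PCB. The bit widths are
# illustrative assumptions, not part of any published protocol.

BOARD_BITS = 4    # up to 16 PCBs on the PCI-side ring
DEVICE_BITS = 7   # matches the 7-bit I2C address space

def pack_address(board, device):
    """Pack (board, device) into a single integer ring address."""
    if not (0 <= board < (1 << BOARD_BITS)):
        raise ValueError("board number out of range")
    if not (0 <= device < (1 << DEVICE_BITS)):
        raise ValueError("device number out of range")
    return (board << DEVICE_BITS) | device

def unpack_address(addr):
    """Split a packed ring address back into (board, device)."""
    return addr >> DEVICE_BITS, addr & ((1 << DEVICE_BITS) - 1)

# e.g. device 0x42 on board 3:
addr = pack_address(board=3, device=0x42)
```

A message whose board field differs from the local board number would be forwarded onto the "on PCI bus" ring by the monitor/master device; a matching board field would keep it on the "on board" TWI ring.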

Modeling the LispM – Quartus II and CADR, an Outline

Ed. Note: This article should subsequently be updated for modeling of the CADR schematic, using Xilinx ISE WebPack and the Papilio FPGA platform -- ideally, then to develop an I2C bus as an initial extension onto the original CADR design, such that the I2C bus could be operated via the GPIO headers on the Papilio platform, in communicating with one or more other devices on the same I2C backplane. See also: TI PRU Cape for the BeagleBone platform.

CADR

  • Lisp Machine
  • Developed at MIT (circa late 1970's, early 1980's)
  • Predecessor machine was CONS
  • Described in public domain (?) AI Memos originally published by the MIT AI Lab
    • Design
    • Schematics
    • Wiring/Pinouts
    • Peripherals incl. Chaosnet network interface
  • Designed in an era when integrated circuit (IC) technology was implemented primarily in applying techniques of transistor-transistor logic (TTL)
    • ICs composed of circuits of bipolar junction transistors, resistors, and capacitors, packaged as multiple elements within individual integrated circuit modules (e.g. four SEL-D FF elements to one 25S09 IC)
    • Standard (?) TTL logic/voltage bounds
      • High/low min/max voltage ranges on each of TTL circuit element input and output
      • "Nondeterministic" voltage ranges (voltage high/low unspecified) on input and output
      • Logically combined voltage input/output differentials
      • (?) Specified voltage ranges may vary by point of reference, outside of manufacturers' actual data sheets
      • Principles and metrics of voltage, current, and IC fanout
    • Circuit power consumption
    • Clock frequencies
    • Circuit fanout
    • Contrast: CMOS (cf. MOSFET transistors)
  • Four microinstructions
    • ALU
    • BYTE
    • DISPATCH
    • JUMP
  • Memory, storage, display, and networking peripherals – applied to, if not furthermore in extension of, technologies available in the epoch of CADR's design
Quartus II
  • Developed by Altera
  • Focused primarily towards design, simulation, and application of FPGA platforms
  • May be used for design and simulation of non-FPGA platforms
  • Documentation: [Q2] Quartus II Handbook, PDF edition
  • Quartus II Web Edition
    • Available for free download and install
  • Application Use Cases may include
    • Block diagram definition
    • Waveform simulation
    • Device programming (e.g. System-on-Chip devices, Altera FPGA programming)
    • Student lab exercises, at DeVry University Online
    • Modeling of Integrated Circuit (IC) logical profiles (e.g. ideal 25S09 SEL-D FF)
    • Modeling of ICs as manufactured – historically and/or contemporarily – modeling in IC material profiles (e.g. AM25S09 or low-power logical equivalent) as in material limitation of the ideal logical profile of any single IC element (e.g. 25S09)
      • See also: Newton's contrast of geometry and mechanics, in the Principia
  • Quartus is a platform combining a significant number of tools for Electronic Design Automation (EDA)
  • Some individual tool components in Quartus are implemented as to support standardized hardware definition languages, including:
    • Verilog
    • VHDL
    • EDIF
  • Quartus Qsys system files
    • May be used to model, in each, an IC used in the original CADR schematic
      • Refer to: 
        • [Q2], p. 270, Creating Qsys Components
        • [Q2], p. 274, Creating Qsys Components in the Component Editor
      • Note that the components will be HDL-based, deriving from the Unlambda CADR Verilog Files
      • Some new Verilog files required
        • e.g to combine four ff_dsel modules into one (new) 25S09 module
          `include "ff_dsel.v"
          
          module m25S09(s,cp,d0a,d0b,d1a,d1b,d2a,d2b,d3a,d3b,q0,q1,q2,q3);
              input s,cp,d0a,d0b,d1a,d1b,d2a,d2b,d3a,d3b;
              output q0,q1,q2,q3;
          
              ff_dsel d0(q0,d0a,d0b,s,cp);
              ff_dsel d1(q1,d1a,d1b,s,cp);
              ff_dsel d2(q2,d2a,d2b,s,cp);
              ff_dsel d3(q3,d3a,d3b,s,cp);
          endmodule
          
        • must be determined per each individual IC defined in the schematics
    • Qsys IC component definitions may be combined within the Qsys System File editor
      • Refer to [Q2], p. 195, Creating a Qsys system
Modeling CADR in Quartus II
  • Primary references
    • Public-Domain CADR Lisp Machine schematics, wiring, and design documents, published originally in MIT AI Memos
    • IC manufacturers' data sheets (archived)
  • Supporting resources
    • Verilog files for CADR, as published by Unlambda.com
      • Developer Notes:
        • Licensing not specified, assumed "Public domain"
        • Does not provide an exact model of the schematic – for instance, four Unlambda CADR Verilog ff_dsel (shared sel, clk pins) to each 25S09
        • CADR may be modeled, alternately, in other hardware definition languages
          • NGSPICE defines a standard D flip-flop element, unbuffered. When defined in a module with a buffer, it might be compatible with the Unlambda CADR Verilog ff_dsel and, likewise, the original 25S09 IC logical profile.

Tuesday, May 27, 2014

OS Platform Compatibility, in a Laptop Recovery View - Filesystem Compatibility, Binary Incompatibility

Recently, my primary laptop became effectively bricked, insofar as booting from the laptop's internal hard drive and booting to the MS Windows 7 installation on that laptop. It's a Toshiba model, an A665, with a thin form factor not affording much air circulation internal to the laptop. The laptop has a wide screen and a contemporary Nvidia graphics card. It has been my "main laptop" for a couple of years. Tasks that it is not bricked for would include:
  • Boot from external DVD drive or flash media
  • Chroot to the Linux partition on the internal hard drive -- the SystemRescueCD can be used to manage the chroot environment of the same, for simple recovery tasks, albeit with some concerns regarding kernel compatibility and device creation under devfs
  • Mount the Windows main partition or either of the recovery partitions on the internal hard drive
I've tried to fix the MBR on the internal hard drive, using TestDisk, such as is distributed on the SystemRescueCD. However, the laptop is still not bootable from its internal hard drive. I plan on holding onto the laptop nonetheless, until I may be able to replace the internal hard drive with an internal SSD drive, and to re-image the "good partitions" from the existing internal hard drive onto the ostensible SSD drive -- certainly not carrying over any "bad bits" from the MBR sector.

In short, I can boot the laptop from an external DVD drive, and can mount and chroot to the Linux partition on the laptop, and from there, I can mount the Windows partitions. It's fine for filesystem recovery, but with the internal hard drive in an unbootable condition, and with the recovery media unable to boot the existing Windows partition -- though it can mount the same -- to my observation, it's effectively "nil" for running any of the Microsoft Windows software installed on the same.

I've reverted to an older laptop, an older Toshiba, which has Windows Vista installed. That's been sufficient for running the software required for courses at DVUO; however, I am certainly interested in rendering my "main laptop" usable again, at least on its Linux partition, and without the DVD dongle. It would serve at least as a cross-compiling platform for the ARM architecture.

As of when my "Main laptop" became unbootable, then my Chromebook became the effective "mainstay" for developing on Linux, on my "Home network." It's a Samsung Chromebook. Samsung Chromebooks use an ARM architecture -- I'd chosen the particular model of Chromebook, as due to that, when I purchased the Chromebook from BestBuy, on my student budget. I've since enabled Developer Mode on my Chromebook and installed a KDE desktop environment into a chroot, using Crouton, furthermore referencing a "How To" published by HowToGeek, How to Install Ubuntu Linux on Your Chromebook with Crouton.

The chroot environment on the laptop is using Ubuntu Linux, 12.04 "Precise". The Linux kernel is version 3.8.11, armv7l architecture.

To my understanding, QEmu on the ARM architecture cannot provide hardware-accelerated virtualization of an amd64 guest; at best, it could emulate one via dynamic translation, impractically slowly on this hardware. Virtualbox is not available for Linux ARM, either. Failing any other alternatives for OS virtualization on Linux ARM, my Chromebook therefore cannot practically serve as a virtualization host.

There are a number of software applications that I would prefer to use on my Chromebook -- including Modelio, and the latest edition of the Eclipse IDE -- but which are not immediately available on Linux ARM, though they are available on amd64, which is the architecture of the Linux installation on my main laptop. Given that my ARM Chromebook cannot practically emulate an amd64 environment, I shall have to either do without those applications, indefinitely -- an undesirable situation, certainly -- or figure out how to recover my A665 laptop, at least so far as that it can boot without the "DVD dongle".

My A665 laptop can boot from USB. Of course, all that I need to have it boot from USB would be: the boot loader, ostensibly. There's already a working configuration of a Linux root partition and swap partition, accessible after boot, on the internal hard drive of the laptop. I might as well wish to install a minimal OS configuration onto the flash media, however -- something that could at least run a shell, in the event of that flash media being the only usable media in the laptop.

I've yet to find a "Pre-built" Linux USB thumb drive distro that would be clearly compatible with this particular use case -- basically, just an external, chained boot loader -- moreover in a minimal filesystem configuration.

This morning, I have begun to read up about Ubuntu Core, a minimal Linux distro.

If I was able to run a virtualization environment on my Chromebook, I could begin to configure a thumb drive for using Ubuntu Core, immediately, using chroot and/or QEmu. There would be a certain matter of "platform difference", as between the host platform and the target platform, however, when my Chromebook (ARM) is the host platform, and the A665 (amd64) is the target platform.


Here is a model for a Platform view of my own PC environment, expressed in a sort of ad hoc UML notation:

[Target Platform <<Platform>>]-*----+-[Filesystem]
  |
  |+
  |
  |
  |+
  |
[Host Platform <<Platform>>]-*----+-[Filesystem]

In the situation in which the Ubuntu chroot on my Chromebook (armv7l) would be the ostensible "host platform," the "target platform" is amd64, but the former is a host platform that is currently unable to run virtualization software, to my best understanding. The ARM and amd64 architectures would be incompatible, moreover, due to "something in regards to ABI," broadly.

Effectively, an ARM Chromebook cannot serve as a complete host platform for building and testing of a recovery disk for an amd64 OS.

Of course, a host platform running an ARM Linux OS can be used to transfer a filesystem image to a storage device supported by the host platform, and can mount an EXT2 or later edition EXT_ filesystem from a supported storage device -- or can mount a filesystem directly from a block device image -- as in a manner of filesystem compatibility. However, if the target platform does not implement a processor architecture similar to that of the host platform, then the host platform cannot effectively chroot to the target platform's root filesystem, as the machine-dependent files on the chroot "target" would be binary incompatible with the machine architecture of the host platform.

In summary:
  • ARM and AMD64 Architectures : Binary incompatible, in machine-dependent files
  • EXT4 Filesystem: Compatible with any OS such as can mount an EXT2 or later EXT_ edition filesystem. Mounting an EXT4 filesystem as EXT2, of course, would entail a certain loss of features available in EXT4 (cf. filesystem journaling) for the duration of time in which the filesystem would be mounted, in the host OS.
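The binary-incompatibility point above can be made concrete by inspecting the ELF header of a machine-dependent file: the e_machine field records the target processor architecture, so an armv7l host can read an amd64 binary's metadata (filesystem compatibility) while being unable to execute it. Field offsets follow the ELF specification; the header bytes below are synthetic examples:

```python
# Reading the e_machine field of an ELF header, to see why ARM and
# amd64 binaries are mutually incompatible even when the filesystem
# holding them mounts fine. Offsets are from the ELF specification:
# magic at bytes 0-3, EI_DATA (byte order) at byte 5, e_machine as
# a 16-bit field at byte offset 18. Only the common little-endian
# case is handled in this sketch.
import struct

ELF_MACHINES = {40: "ARM", 62: "x86-64", 183: "AArch64"}

def elf_machine(header: bytes) -> str:
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    if header[5] != 1:  # EI_DATA: 1 = little-endian
        raise ValueError("only little-endian headers handled here")
    (e_machine,) = struct.unpack_from("<H", header, 18)
    return ELF_MACHINES.get(e_machine, f"unknown ({e_machine})")

# A synthetic 20-byte header fragment for a 32-bit ARM executable:
# magic, then EI_CLASS=1, EI_DATA=1, EI_VERSION=1, padding,
# then e_type=2 (executable) and e_machine=40 (EM_ARM).
arm_header = (b"\x7fELF" + bytes([1, 1, 1, 0]) + b"\x00" * 8
              + struct.pack("<HH", 2, 40))
```

In practice one would read the first 20 bytes of, say, `/bin/ls` under the chroot target's root filesystem; a mismatch between that value and the host's own architecture is exactly the condition that makes the chroot's binaries unrunnable.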

In extension: Use cases for a dynamic host platform modeling interface

  • Filesystem Imaging
    • Integration with functional filesystem management tooling (favoring command-line tools)
  • Cross-Compilation
    • Integration with functional distribution management tooling (favoring command-line tools)
  • Network Management
    • Integration with functional user authentication interfaces (Kerberos, etc)
    • Integration of network models (Ethernet/WLAN, IPv4 protocol stack, etc)
    • Integration with networked host control interfaces
      • SSH - Command line interface
      • CORBA - Distributed network host management interface TBD

Monday, May 26, 2014

Notes – Serial Protocols

On one ambitious day, shortly ago, I'd found some resources in technical academia online, namely with regards to serial "wire protocols" developed in digital electronics. I'd like to share that small part of my student notes outline here, as I think I should wish to make reference to that outline later on, and this small outline could be of any interest to the arbitrary reader.

Ed. Note: I'd cut and pasted the following outline from within my own student notes, originally on a mobile tablet computer. I'd used the uxWrite app, on an iOS mobile tablet, for these notes. The uxWrite app, effectively, ensured that the text I'd copied from the uxWrite app was represented in Markdown format, when it was pasted into the 'blog entry, in the blogTouch app. A study of application cut/paste protocols would be left as an exercise to the reader, as would the reformatting of this outline into HTML format, immediately.


  - Serial Peripheral Interface (SPI)

      - “Three wire” protocol (Clock, SOMI, SIMO) \[eLinux:BBSPI\]

      - Microelectronics

  - UART, USART \[Durda\]

      - UART: Universal *Asynchronous* Receiver/Transmitter

      - USART: Universal *Synchronous-Asynchronous* Receiver/Transmitter

      - Synchronous Rx/Tx between single sender and single receiver

          - Clock signal provided by sender

          - “Strobe” signal

      - Asynchronous Rx/Tx

          - “Start bit”

      - As UART protocols \[Durda\] RS232, EIA232 \[Strangio\]

          - EIA232

              - Update of RS232

              - Conventionally, uses a modem interface

              - 22 pin (including ground) serial protocol with flow control and secondary data channel (e.g used for retransmit after parity error) using DB25 connectors

              - 9 pin (including ground) serial protocol with flow control, using DB9 connectors

  - PWM

  - I2C \[Magda\]

      - Multiple devices per I2C channel

      - “Two wire” protocol (SCL, SDA)

      - ***Compare***: TWI \[[Wikipedia](http://en.wikipedia.org/wiki/I%C2%B2C)\]

  - Serial ATA \[Lee\] (SATA)

      - 1.5 Gb/s, 3 Gb/s, 6 Gb/s

  - PCI Express \[Lee\] (PCIe)

      - PCIe 2 : 5 Gb/s

      - PCIe 3 : 8 Gb/s
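The I2C addressing noted in the outline above can be illustrated at the byte level: after the START condition, the master sends a 7-bit slave address followed by one read/write bit. This sketch covers only that framing, not the SCL/SDA signaling; the EEPROM address used in the example is an assumption for illustration:

```python
# Illustration of the I2C address byte: a 7-bit slave address
# followed by one R/W bit (0 = write, 1 = read), as sent by the
# master after the START condition. A sketch of the framing only.

def i2c_address_byte(address: int, read: bool) -> int:
    """Build the first byte of an I2C transfer."""
    if not (0 <= address <= 0x7F):
        raise ValueError("I2C addresses are 7 bits")
    return (address << 1) | (1 if read else 0)

def parse_address_byte(byte: int):
    """Recover (address, is_read) from the first byte."""
    return byte >> 1, bool(byte & 1)

# e.g. a read addressed to a hypothetical device at 0x50:
byte = i2c_address_byte(0x50, read=True)
```

Every device on the bus decodes this byte and compares the address field against its own, which is the "all devices listening at all times" behavior described in the token ring sidebar above.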

Works referenced:

\[eLinux:BBSPI\] Embedded Linux Wiki. *BeagleBoard/SPI*. 2013. [Available (HTTP)](http://elinux.org/BeagleBoard/SPI). Accessed 9 May 2014

\[Durda\] Durda, Frank. *Serial and UART Tutorial*. 2014. [Available (HTTP)](https://www.freebsd.org/doc/en/articles/serial-uart/). Accessed 9 May 2014

\[Strangio\] Strangio, Christopher E. *The RS232 Standard, A Tutorial with Signal Names and Definitions*. 2012. [Available (HTTP)](http://www.camiresearch.com/Data_Com_Basics/RS232_standard.html). Accessed 9 May 2014

\[Magda\] Magda, Yury. *Raspberry Pi Measurement Electronics: Hardware and Software*. 2014. Kindle edition.

\[Palermo\] Palermo, Samuel. *High-Speed Serial I/O Design for Channel-Limited and Power-Constrained Systems*.

\[Lee\] Lee, Edward W. *High-Speed Serial Data Link Design and Simulation.*

Friday, May 23, 2014

Notes - Towards an Eclipse Kepler Distribution in Debian Packages, a Domain Model for Computer Operating Systems, and CCL for Firefox OS

At a time earlier this week, I'd installed the Eclipse IDE via the Ubuntu 12.04 "Precise" package repository, under the chroot environment installed on my Samsung Chromebook, which runs Linux on an ARM architecture -- as I'd noted in a blog entry, along with a convenient Eclipse platform versions/names table excerpted from Wikipedia, the Free Encyclopedia.

Having observed that the Eclipse IDE, in its version as available via the Ubuntu package repository -- correspondingly, via the Debian source package, eclipse -- is not the latest release version of the Eclipse IDE, and that this entails that some plugins, such as those of the Eclipse Papyrus modeling tools project, will not be available to the user in their newest versions when installing the Eclipse IDE via the Ubuntu package repository, I've begun looking into the question: How to build Eclipse 4.3, "Kepler", for Ubuntu on Linux ARM?

Of course, although a Linux ARM build of the Eclipse IDE is available via both Debian and Ubuntu, the Eclipse.org web site does not provide an Eclipse build for that architecture. Eclipse.org does provide downloads compiled for Linux x86, in 32-bit and 64-bit architectures, but nothing for running the Eclipse platform on ARM.

The Debian source package, eclipse, publishes a convenient Git repository, in a sense of both browsable (HTTP) and cloneable (Git URI) repository services. There's also a convenient, moreover exhaustive, list of build dependencies published at the page for the Debian source package, eclipse. So, with that great level of developer convenience being available about the Debian eclipse source package, and with my being interested in effectively updating the Debian eclipse source package to provide Eclipse 4.3 -- at least in a local prototype build -- I've cloned the Git repository for the Debian eclipse source package.

When taking a look at files under the 'debian/' directory of the cloned source tree, I've found the file, 'debian/README.Source', in which some core features of the architecture of the source package are described, including:
  • The Eclipse Build component of the Eclipse Linux Tools project
  • A platform source archive, eclipse-${VERSION}-src.tar.bz2
Alternately, one may follow the Eclipse Linux Tools Project's Eclipse Build instructions unadorned, onto the latest Eclipse source code, on ARM, albeit without the additional Debian packaging for redistribution -- substituting a Debian package dependency management command line tool, such as aptitude, instead of yum.

... and that would be fine, except for the dependency on Jetty. Ubuntu 12.04 "Precise" does not provide a jetty-server.jar file. There is, however, a jetty8-server.jar available in the libjetty8-java package, in the "Trusty" edition (file list).

So, for backporting Jetty 8.1 from "Trusty" to "Precise" (source package jetty8, "Trusty" edition), some of the build dependencies for jetty8 must also be backported from "Trusty", namely:
  • libgnumail-java (>= 1.1.2-7) source package libgnumail-java
  • libtomcat7-java (>= 7.0.28) source package tomcat7 
  • glassfish-jmac-api (>= 1:2.1.1-b31g-2) source package glassfish
Of course, one might expect that a backport of each of Apache Tomcat 7 and Glassfish might require, in each, a further set of additional backport builds.

Failing the availability of an ARM port of "Trusty", this would be one approach towards building Eclipse 4.3 "Kepler" on the Ubuntu "Precise" release, in the ARM architecture of that release of the Ubuntu GNU/Linux distribution.
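The backport ordering implied above -- each package must be built after the packages it build-depends on -- amounts to a topological sort. The edge list below just restates the dependencies named in this post; the sorting helper is a generic illustration, not a real packaging tool:

```python
# Ordering the backport builds: a package can only be built once
# its build-dependencies have been built. The dependency edges
# here restate the packages named in the post; graphlib's
# TopologicalSorter emits predecessors before dependents.
from graphlib import TopologicalSorter

build_deps = {
    "eclipse": {"jetty8"},
    "jetty8": {"libgnumail-java", "tomcat7", "glassfish"},
    "libgnumail-java": set(),
    "tomcat7": set(),
    "glassfish": set(),
}

# static_order() yields a valid build order, leaves first.
order = list(TopologicalSorter(build_deps).static_order())
```

If, as the post anticipates, tomcat7 or glassfish turn out to have further backport-requiring build dependencies of their own, those would simply be added as additional edges and the same sort would still produce a valid build order.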

Towards a Domain Model for Operating Systems and OS Distros


This backporting task may be further facilitated with the development of some additional software tools. Specifically, if developing a Common Lisp program to facilitate this package backporting process, one could begin by developing an abstract domain model out of this generic software task, in Common Lisp, such as:
  • Class: Architecture
  • Class: Release
  • Class: Distribution
One could then extend that abstract domain model into an instance-specific model, such as for an application of that model onto the Ubuntu GNU/Linux distribution. Of course, one might be aware that the Ubuntu GNU/Linux distribution derives from the Debian GNU/Linux distribution, and uses the same essential software dependency management and software distribution tools as Debian GNU/Linux and most distributions deriving from Debian.

That would be to the definition of a toolchain disjunct to that of the Red Hat and Fedora GNU/Linux distributions -- such as Red Hat's commercially-supported enterprise edition distros like RHEL, or a Fedora Spin distribution, such as the Fedora Electronics Lab.

In a modeling sense, one may define the domain model such that the class "Linux Distro" would be defined as to extend the class "Distribution", and that each of "Redhat Distro" and "Debian Distro" would be defined as to extend the class "Linux Distro".
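The post proposes this domain model in Common Lisp (CLOS); as a sketch only, the class hierarchy can be illustrated in Python. The class names follow the post; the attributes on Architecture and Release are assumptions added for illustration:

```python
# A sketch of the domain model proposed above. The post intends
# CLOS classes; Python stands in here purely for illustration.
# Class names are from the post; attributes are assumptions.
from dataclasses import dataclass, field

@dataclass
class Architecture:
    name: str                       # e.g. "amd64", "armhf"

@dataclass
class Release:
    name: str                       # e.g. "precise", "trusty"
    architectures: list = field(default_factory=list)

class Distribution:
    """Broad sense of an operating-system distribution."""

class LinuxDistro(Distribution):
    pass

class DebianDistro(LinuxDistro):
    pass

class RedhatDistro(LinuxDistro):
    pass

class UbuntuDistro(DebianDistro):
    """Ubuntu derives from Debian and shares its packaging tools."""

ubuntu = UbuntuDistro()
precise = Release("precise",
                  [Architecture("amd64"), Architecture("armhf")])
```

Modeling "Ubuntu Distro" as a subclass of "Debian Distro" captures exactly the shared-toolchain point made above: any tool written against the Debian Distro class (e.g. the backport-ordering helper) applies to Ubuntu without change.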

Subset of Domain Model: Non-Linux RTOS Distros

Furthermore, though it might diverge from the set of operating system platforms ostensibly compatible with the Debian Free Software Guidelines (DFSG) in any core portion, one could consider defining a class, "QNXNeutrino", as an extension of the class "Distribution" -- as when the class "Distribution" would represent a broad sense of an "Operating System Distribution", if not specifically a "Realtime Operating System (RTOS) Distribution".

In the RTOS domain, there's also FreeRTOS, itself a free/open source platform with commercial support available (summary of FreeRTOS licensing).

RTOS Linux

Sure, it may be viable to apply Linux as an RTOS. However, it might yet be difficult to find exact commercial support for RTOS applications of Linux. Certainly, there's commercial support available for Linux in server applications, and for Linux in desktop applications. However, as far as Linux in industrial automation systems, automotive passenger device systems, and other ostensible applications for RTOS frameworks, one might wonder if there is any commercial support available for such application, of Linux "as yet." In conducting one's research, then, one might find that there are a number of inactive projects for RTOS Linux, including:

In a contemporary regard, there's Xenomai and the corresponding RTAI interface for timing-restricted applications, ostensibly with an XML-based server/RPC complement, RTAI-XML, perhaps as an alternative to Realtime CORBA.

So, indeed, there's RTOS support of a kind, available for applications of the Linux platform. Sure, it might not meet with the most immediate demands of the world's boldest Ringworld engineers -- with apologies to fans of Larry Niven. However, with some patience and effort, it could turn out to be all of a commercially viable RTOS platform -- among things that one would not be able to add to a formal academic thesis.

Subset of Domain Model: Lisp Machine Operating Systems

Sidebar: Common Lisp is not BASIC is not a Horse Race

The reader might not be familiar with the author's position about the Common Lisp platform. Sure, in a personal regard, it's been nothing short of a long, rough road for the author, in studying the Common Lisp programming language and its applications thus far, and certainly it's a none too commercially common platform, in the industry of the epoch. As a programming language, Common Lisp is certainly a more developed programming language than BASIC, though. For instance, the BASIC language -- at least in its Tandy Color Computer 3 edition -- doesn't feature, in itself, any sort of an object-oriented programming model such as in ANSI Common Lisp (cf. CLtL2), let alone anything to compare to the full metaobject protocol (MOP) in Common Lisp. GOSUB is still pretty cool, though, at least for "that kind of software programming" -- such as of virtual horse races on a console screen, or a financial break-even analysis for a small/home office, and so on, mechanically.

The reader would be even less likely to have been familiar with either of the author's grandfathers. Some things would go unknown, in life, for a period of time.

'Boston Beans', the Gazebo Hub Lisp History Project Edition 1

If every project needs a novel name to denote it by, I'd choose a practical name for the beginning of a history of the Common Lisp programming language and its implementation in computing machines -- nothing too ostentatious, a humble can of Boston Baked Beans, the inspiration (via Amazon.com Prime Pantry, in a pinch.)

So, then, but how does one set out, practically, to begin to define a history of the Common Lisp programming language and its applications in computing machines? Would it require anything short of the expeditious skill of the discoverers of the Elephant computer, if not  the mathematical and mechanical skills of her original engineers? (Heritage Universe, Wikipedia) 

Sure, a can of baked beans suffices, in such an extensive endeavor -- nutritious food, sugary but kind, good foodstuff extended with a few items of bread, beside.

To begin to define a model subset of an Operating Systems model, for Lisp Operating Systems, one might begin at such a practical beginning as one would know, and by that I am referring to a private, ongoing study -- albeit, a study of only some few months' duration, "thus far" -- of the design of the CADR Lisp Machine.

Though the author of this item understands that CADR was neither the first nor the ultimate Lisp Machine -- in itself, it was preceded by the CONS Lisp Machine, and succeeded by the Symbolics Lisp Machines -- there is simply the practical concern that there are schematics and manuals available, all legally and ethically, as public domain resources -- books detailing the design and operations of the CADR Lisp Machine and its corresponding operating system, items originally published by MIT's AI Lab. Ostensibly, those same schematics could be studied and redesigned for application in developing a Lisp Machine prototype onto an FPGA platform -- whether or not one may have applied oneself to schoolwork well enough to be a student of MIT, as the author of this item is not, albeit with regrets.

In that much, such a study would not serve to explain how a contemporary Common Lisp implementation may be effectively interfaced with a contemporary host operating system, such as Debian GNU/Linux. In a sense, there might even be a sort of "crossover" determined between definitions of "Lisp Implementation" and "Operating System," in which the original Lisp Machines would have provided both "Lisp Implementation" and "Operating System," in any single, complete "software package," as one might presume.

Of course, a history of Common Lisp might represent likewise a history of software, hardware, and engineering, so far as conducted to the development of Lisp Machines and Lisp programming platforms with, in each, a user interface of some kind.

That, in itself, might not insomuch represent a history of the development of multi-user networked operating systems and industry specifications such as POSIX and the X/Open Portability Guide (XPG). Sure, it's not a monopoly, computer history, whether by way of Lisp Machine, or CP/M, or UNIX, or NeXT, or otherwise.

...and sure, there is history in computing, and there are contemporary models, in computing. It's unlikely that the latter could have developed without the former -- it being not like any matter of the lucky turnip or the spontaneous "theory" undeveloped of practical application. One might choose to study either and both, then, in developing a contemporary application of computing, in a generic sense. Alternately, one might wonder: Of whence would new applications be developed, except of existing work?

Towards a "State" of the Turnip - Principia "Turnip" Milestone

Sidebar untold, momentarily: The Principia Project, and the Gazebo Hub.

With regards to the question of "existing work," but presently, leaving aside the epistemological discussion with regards to patent guidelines -- i.e. patent laws -- such as might exist in any single world nation of somesuch commercial development, there's still the question of how to define a subset of a model for operating systems, specifically about Lisp Operating Systems -- considering not only that the history of Lisp Machines might be scarcely known, in the contemporary industry, but furthermore that there may be a hybrid model defined, of a Lisp/Linux system. Certainly, a project for development of a hybrid Lisp/Linux system could be conducted towards a more immediate commercial interest, in contemporary computer platform development, broadly, "The Industry." If it could seem to be a platform seeking a sense of relevance for popular interest, one could sketch out a few ideas, at that -- all of which might seem more geared toward crowdsourcing than towards any manner of a conventional corporate model:

  • At a server tier: Definition of a parallel computing model for parallel logical and statistical calculations onto the Amazon Elastic Compute Cloud (EC2) and affiliated components published by Amazon Web Services (AWS), using Lisp on Linux
  • At a desktop tier: Definition of a data flow model for Common Lisp programs, and a corresponding graphical interface, inspired by National Instruments (NI) LabVIEW(TM), using Lisp on Linux, with extensions specifically for using interfaces available in single-board computing platforms such as the BeagleBone Black and the PRU-ICSS module in the TI Sitara MCU on the BeagleBone Black, for data-flow programming onto I2C device networks.
  • At a device tier, in a long term effort: The lovely project of transcribing the CADR schematics into an EDA format, revising that same schematic for a single device platform's memory, clock, and other interfaces, ostensibly using the Papilio FPGA platform, and then implementing something like a Lisp operating system onto that same platform (viability unknown). 
  • Alternately, at a device tier, for defining a further computing platform, though without designing a "new OS", initially: Linux as RTOS, and Lisp, "Various projects", many of which might be ad hoc, in a sense, no doubt (recommended focuses: CORBA, I2C, parallel computing, ostensibly towards the definition of a realtime computing supercluster using Linux and single board computers having  I2C interfaces). See also: Liu, B., Y. Wen, F. Liu, Y. Ahn, and A. Cheng. "EcoMobile: Energy-aware Real-time Framework for Multicore Mobile Systems." (2011).
  • Alternately, at a device tier, using only existing computing platforms without further interfaces and extensions: An investigative study of Clozure Common Lisp (CCL) and McCLIM extended for Cairo on the FirefoxOS platform. See also:


(This is 'blog entry draft 6, 2 June 2014)

Wednesday, May 21, 2014

Notes - Applications for Memory Locking, and a Thesis Outline about Real Time Application Design

While studying some materials with regards to applications for Common Lisp programs, in integration with Amazon Web Services (AWS), I began to read about the process of request signing for the REST API available onto select AWS applications. This morning, I've found the following resources, specifically in that topic domain:
In regards to the secret key component, specifically, I thought it was an apropos time to begin studying about memory locking.

In that course of study, I'd found documentation about the POSIX mlock() function, as implemented in GNU LibC on the Linux platform -- namely, manual page mlock(2), in the Linux Programmer's Manual.

It was in reading the notes section of that manual page that I discovered a certain item of some tangential interest, as highlighted in the following excerpt -- emphasis and link added, for purposes of clarity and informativeness, respectively, namely with regards to the matter tangential to this study of signing and authentication of AWS REST API requests.
Memory locking has two main applications: real-time algorithms and high-security data processing. Real-time applications require deterministic timing, and, like scheduling, paging is one major cause of unexpected program execution delays. Real-time applications will usually also switch to a real-time scheduler with sched_setscheduler(2).

manual page mlock(2), in the Linux Programmer's Manual
Of course, in trying to address memory locking for purposes of both secure data processing and real-time application design, simultaneously, it might serve to create a sort of priority deadlock. Thus, I arrive -- essentially -- at the matter of why I've begun writing this notes page, this morning.

Real-time application development -- such as I understand it -- would be a concept rather broader than, specifically, the Common Lisp programming platform. As a software developer familiar with the Common Lisp programming platform, I may wish to specialize my own point of view about the concept, as though it was a concept of some meaning exclusively with regards to Common Lisp software programming. In that sense, I might like to endeavor to "brand" the concept; however -- speaking as a student of computer networking, somewhat familiar with some Microsoft products -- I'm afraid that my own specialized "brand concept" would not be sufficient to really address the nature of the broader concern, in any sufficiently technical regard.

My logical bias about the concept thusly denoted, I would like to gather some notes, here, with regards to design, development, and implementation of real-time applications. It's my point of view that the following observations, broadly, might "stand to reason," so far as one might logically ascertain in a sense of focus, if not of some practical, by and large personal experience with regards to computer systems and software programming.

Causes of execution delays in software programs
  • Execution Delays, in an Event-Oriented View
    • Garbage collection (GC)
    • Virtual memory paging
    • Linear network/communication delays
      • I/O blocking
        • Causes (TBD)
        • Mitigating factors
          • I/O Scheduling
          • ...
      • Network latency
        • Link-level latency
        • Packet routing
    • Schedule Wait
      • Event Networks
      • Linear dependencies onto other events
  • Execution Delays, in a Composite View
    • Total momentary delay time 
      • As in: The sum of event delay times among individual processes 
        • Aside: View of an application as a finite state machine (i.e. FSM)
      • Mechanical delay time
        • measured as a discrete time value
        • the difference of (1) expected time of completion and (2) current time
    • The "Butterfly factor"
      • Nondeterministic systems
      • Unexpected delays
      • Mitigating factors:
        • Expectations Resilience
        • Design Resilience in Systems Planning and Development
        • Advice
          • Painter, Bob Ross
          • Proverbs Literally of Patience, a selection of: 
            • "Ants aren't a strong species, yet they prepare their food in the summer" -- Proverbs 30:25, International Standard Version.
            • “Simplicity, patience, compassion. / These three are your greatest treasures. / Simple in actions and thoughts, you return to the source of being. / Patient with both friends and enemies, / you accord with the way things are. / Compassionate toward yourself, / you reconcile all beings in the world.” — Lao Tzu, Tao Te Ching. excerpt.
            • Guns 'n Roses, Patience. music video
Commentary (ad hoc)

Of course, if I was to develop a singular thesis about the concept, then I should certainly like to detail each of those technical topics, further. Of course, I should have to elide the subjective/creative/interpretive matter of advice, in that.
  
Candidly: If at this time, I was enrolled in any sort of a formal graduate studies program, academically, then perhaps it might be more obviously relevant to the academic institution to which I am enrolled that I would begin to write a thesis paper. Failing that, however — candidly — I cannot expect to find either guidance or personal incentive, to develop any such outline into a thesis paper. That being, no doubt, a task requiring of no small sense of focus, I shall prefer to leave it at an outline, at this time.


Tuesday, May 20, 2014

Eclipse IDE - Versions / Version Names Table

In the interest of having a UML tool available on my Chromebook's Crouton chroot (KDE), I've installed the Eclipse IDE.

My Chromebook is a Samsung Chromebook, and it uses an ARM processor. I've installed the Eclipse platform from the Ubuntu 12.04 "Precise" repository. The Eclipse Platform edition available via "Precise" is at version 3.7.2, i.e. Indigo.

In the interest of disambiguation, I've found a convenient table of version number/name pairs, at the Eclipse (software) page at Wikipedia, the Free Encyclopedia. With reference denoted directly to that resource, for the following direct copy of that table, here is the version/names table, "to date".


Version Name | Date                     | Platform version | Projects
N/A          | 21 June 2004             | 3.0[14]          |
N/A          | 28 June 2005             | 3.1              |
Callisto     | 30 June 2006             | 3.2              | Callisto projects[15]
Europa       | 29 June 2007             | 3.3              | Europa projects[16]
Ganymede     | 25 June 2008             | 3.4              | Ganymede projects[17]
Galileo      | 24 June 2009             | 3.5              | Galileo projects[18]
Helios       | 23 June 2010             | 3.6              | Helios projects[19]
Indigo       | 22 June 2011             | 3.7              | Indigo projects[20]
Juno         | 27 June 2012             | 3.8 and 4.2[21]  | Juno projects[24]
Kepler       | 26 June 2013             | 4.3              | Kepler projects[25]
Luna         | 25 June 2014 (planned)   | 4.4              | Luna projects[26]
Mars         | 24 June 2015 (planned)   | 4.5              | Mars projects[27]

The formatting and footnotes from the original Wikipedia article are preserved, here, respectively in the interests of brevity and informativeness.

The Ubuntu Debian source package for the Eclipse platform is eclipse. The source package is currently available at Eclipse Platform version 3.8, i.e. Juno.

I'm not sure why the Ubuntu eclipse package is not keeping up with Eclipse IDE platform releases. Kepler would be the preferred edition, certainly.

So, alternately, I can try to open up such a proverbial can of worms as would entail compiling Eclipse 4.3 "from scratch." For some reason, I'm unable to download an edition compiled for Linux ARM directly from Eclipse.org, and the edition at Ubuntu.com is not "up to latest." The Ubuntu edition does, however, match the version of the edition available via the Debian eclipse source package, currently available at 3.8.1-5.1 in Debian Jessie and the "continuous" Sid edition.

Presently, I think I'll just rely on the IDE's built-in package upgrades system. If there have been any changes in the SWT binding or the JVM interface since Eclipse Platform 3.7, then in a word, "Darn," those won't be available on my Chromebook, unless I can figure out how to compile Eclipse 4.3 directly from source, either on my Chromebook, or (more complex) on another platform, with cross-compile.

I think I'll just use the built-in Eclipse plugin upgrades feature, "for now," and so inasmuch I'll have to use the Indigo edition of the Eclipse Papyrus plugins. "Good times...."

IPv4, IPv6, CORBA, and also: Device Filtering on IP Networks

The following is an item that I'd written for an online class, today. The topic for the forum was namely a matter focused on subnetwork addressing, in classless (i.e. CIDR) and classful IPv4 addressing.

In writing my response for the forum item, I've arrived at a couple of possible thesis topics, including:
  • IPv4 addressing considered limiting for CORBA application networks
  • MAC addresses not considered sufficient for network device authentication
My response was as follows. For convenience, I'm just going to cut and paste my own response to the forum item, here.

As an obvious "Note to self," here shared in a sense, I think that it would behoove me to continue to study the IPv6 implementation on Linux.

Separately, there's the matter of MAC filtering, MAC spoofing, and alternately VPN architectures.

Ed note: Blogger's HTML editor has "some issues" in regards to how it applies Cascading Stylesheets. I've had to process this HTML by hand, so that it will be legible to the reader. Of course, I used Emacs, for the quick markup change, in this item.


Functionally, a subnet mask is a bitmask for the sequence of consecutive bits of an IPv4 address, with "all ones" for the bits that represent the network address portion of the IPv4 address [TCP/IP Guide Reference 1, Reference 2].

Whether in CIDR addressing or classful addressing, subnetting revolves, functionally, around the application of a subnet mask. 
In classful addressing, for address classes A through C, the classful IPv4 address's subnet mask falls at a full byte boundary in the IPv4 address [TCP/IP Guide, Reference 3]. Class A addresses have a subnet mask of 8 bits (one byte) in length, class B subnet masks are 16 bits in length, and class C subnet masks, 24 bits in length, with the remaining bits being available for host addressing and subnetting on a single network.

In CIDR, a subnet mask may be defined at any effective bit length less than 32 [TCP/IP Guide Reference 2].

For example, on a network using CIDR addressing, when the assigned IPv4 netmask for the network is 28 bits in length, 4 bits are then available for addresses for hosts and subnet gateways, on the network. That equates to 2^4 total addresses, namely 16 -- of which two, the network address and the broadcast address, would not be assignable to hosts, leaving 14 usable host addresses.


If one may introduce a sidebar with regards to network address translation, as in an interest of informing the discussion with regards to IPv4 addressing:

Individual subnets of a network, if using network address translation, may use any of the IPv4 address ranges reserved for private IPv4 networks [RFC1918] for addresses on any single, local subnet. In a network architecture using network address translation, the number of available host addresses, across all subnets -- rather, the number of "client" host addresses -- would be effectively unbounded, as the private ranges may be reused on every subnet.

Network address translation introduces a corresponding concern with regards to availability of network services on a subnet M, for such services as must be available to hosts on any subnet not being a subnet of M. That may be addressed, effectively, with port forwarding at the network gateway and/or firewall on subnet M, wherein each individual network service that must be available on "not M" must be assigned to a single port "P" on the network gateway for the "M" network itself. The packet filtering framework on M must then be configured to forward packets delivered to port "P" of the network gateway, specifically to a single port P' (P prime, which may or may not be equal to P) on a host A on subnet M, such that the host would be providing the required network service, on port "P prime." Of course, the packet filtering framework must also be configured correspondingly for network address translation, such that packets sent from A, in response to packets sent to A:P', will be appropriately forwarded back to the requesting network peer, on its corresponding client socket port.

Candidly, on a further sidebar: A CORBA application architecture might not scale as well on an IPv4 network using network address translation, not as well as on an IPv6 network not using network address translation. (Considering that I've an interest in developing CORBA network services for consumer applications, and clearly this matter of IP address availability would pertain to CORBA service implementation, then I hope that it may be an apropos topic to continue about, briefly, here). In short, if a CORBA ORB on a network not M must access a CORBA ORB on a network M, and the gateway for M is using network address translation, then the number of ORBs that the ORB on not M may connect to is effectively limited to the number of available network ports on the network gateway for M. Moreover, in this situation, the configuration for network port forwarding on M may be decisively non-trivial. So, for a network of CORBA services, IPv6 network addressing would be preferred.


Continuing on that example, the network gateway for subnet M may be implemented with a Linux host, in which instance, Linux kernel netfilter modules and the iptables command may be used for configuring the effective port forwarding schema for a subnet [NAT HOWTO for Linux 2.4 and later kernel versions]. Citing personal experience, a Linux host with at least two network interfaces would be sufficient -- one network interface dedicated to the "upstream" network, and the second network interface connected to a switch on the Local Area Network or LAN.

Personally, I've not had any direct experience with server rack configurations. It's my assumption that there may be "rack units" available for using Linux as a network gateway and firewall provider. I'm familiar with Linux networking, not really familiar with Windows NT networking. Though Linux might seem to have a more "bare bones" user interface, primarily at the shell commands used for configuring such as a netfilter system, I think it's easier to understand the expected results of configuration changes using Linux shell commands, rather than some GUIs, candidly.

Of course, firewall configuration would be a tangential topic, overall. I'm at least familiar with so much of Linux firewall configuration as may be applied for network address translation.

Tangentially, a network firewall on a network gateway may also be applied for MAC-based peer filtering -- essentially, to block unauthorized PCs from transmitting packets to services on the firewall, and to block unauthorized PCs from transmitting packets through the firewall -- in a sort of rudimentary "authorized network interfaces" configuration, such as can be implemented with bare-bones shell scripting, in Linux, or with further GUI configuration in a broader application design. Of course, it is possible to "spoof" a MAC address. So, I write "authorized network interfaces," rather than "authorized devices," as it is possible, though it would not be ideal, for a single MAC address to be assigned to two or more network interfaces, possibly each on two or more separate network devices. Certainly, there are broader network security frameworks available for network peer authorization -- as may be applied within a single network architecture -- including X.509 certificates such as used in a VPN framework such as strongSwan.

Wednesday, May 14, 2014

Sidebar in a Laptop PC Recovery Task: FAI for Net-Install

In working with my "newer laptop" with its unbootable (damaged) internal hard drive -- usable Linux partition on-disk, but unbootable (damaged) MS Windows 7 partition; the hard reboots that the laptop has made, especially under Windows 7, when overheating, and the filesystem damage on the MS Windows partition resulting from those hard reboots, have really impacted the laptop's usability, candidly -- I've tried using System Rescue CD, namely the TestDisk utility installed on the same. In short, that wasn't enough to make the internal hard drive mechanically "bootable," again. I've also tried using the same SystemRescueCD -- in this instance, successfully -- more or less to "boot" the Linux partition on the internal hard disk, using SystemRescueCD booted from external DVD.

Today, I've noticed that it's not so much "booting the partition," but rather that the System Rescue CD is running a chroot environment, via the SystemRescueCD's own Linux kernel and broader OS. Though that's fine for some qualities of using that Linux partition, it may not be enough for some features of the laptop's nominal usage case, such as:

  1. Running an X.org session using the non-DFSG-compliant NVidia kernel modules installed on the partition. Those are not installed in the SystemRescueCD, and of course, the modules are not successfully loaded under the chroot implemented by the SystemRescueCD's kernel. Consequently, I'm "stuck with" the Nouveau driver for X.org and its grand 680x400 screen resolution on my laptop's NVidia card (chipset GT218/NVA8).
  2. Ensuring that a block device is created under the chroot /dev/ partition, when any new USB flash media device is attached to the PC. I've had to run 'mknod' manually for that purpose, to create an 'sdb' device sufficient to back up my thumb drive's filesystem -- in the latter, using 'dd' -- ostensibly, previous to making a convenient "Boot partition device" out of the thumb drive,
  3. Audio configuration. Granted, my laptop has not demonstrated a great amount of success, at that, even when it could boot directly from partitions on the internal hard disk drive.
  4. "Etc."
So, in order to ensure that the laptop is bootable, even with its damaged internal hard drive -- moreover, that it would be booted from a kernel and OS such as I would be able to update "on the fly" (as is not possible, when booting from an optical storage disk) -- I'm looking at a possibility of installing a Debian Linux OS onto my thumb drive.

Though it is a bit of a sidebar in regards to that task, specifically, I've found something called FAI, the Fully Automatic Installation system. FAI is essentially a framework for network-managed OS installation. It could be used as an alternative to the BeagleBone Black "flash microSD" approach, for installing an OS onto a BeagleBone Black. A Linux installation via FAI would require that an FAI server be available on the local network, in addition to which the install client would need to boot from an FAI install client image. It could be great for managing a network of, if not a cluster of, single-board computers. Of course, the latter should be either "flash upgraded," for installing the FAI client disk on the on-board storage (cf. BeagleBone Black), or booted from external media containing the FAI client image, in order for an FAI managed installation to proceed. So, of course, it would not be an "automatic installation," in that much -- an FAI client disk image must be available to the installation client.

I simply wanted to note that, here, before continuing a search for any sort of single-node management tool for creating a USB boot disk. 

In an alternate reality, perhaps I would be able to begin designing, presently, a novel kernel build and kernel distribution platform for the contemporary local network, however in "This universe," I seem to have chosen, unwittingly, to live in an absurdly liberal area in California. In a year's time, the area has become even more grossly so. In every experience I have had with "society," in the local county and throughout the broader north half of the state, it reminds me of the cautions of unhinged liberal agendas, at best, or "The end of days," very soon.  It is an area that I shall leave, indefinitely. I cannot imagine any more likely outcome, except that I leave. The patently, moronically asinine theatrics of an unhinged liberal community, it is "Too much," after a time.

Saturday, May 10, 2014

Names, Name Syntaxes, and Namespaces – A Survey

With apologies for the poor formatting, in the article's presentation within its style in the Blogger web service.

I've begun developing a small proof of concept, for a URI parser implementation …  in which each of the following concepts would be defined:
  • Names
    • Name: Identifier of an information resource, within an information service
    • Application: URI – applications in URI decoding, storage, and encoding
      • HTTP user agent implementation.
      • URI processing and handling, in defining a Resource desktop model
        • URI Schemes for practical resource reference 
        • Application architecture, ad hoc model:
          • URI query model
          • Resource proxy model
          • Resource presentation model
          • Bibliographical reference model
          • Literary publishing model
          • Archival research (practical, cf. national and local libraries, historic landmarks, historic narratives)
    • Application: Registration of UML named entities within a UML modeling system
      • Namespaces and names in the UML metamodel 
      • UML NamedElement metaclass, UML Namespace metaclass [UML Infrastructure Specification (UML 2.4.1) subclause 11.7, Namespaces Diagram]
    • Application: Registration of named objects, in repository services published by a CORBA Object Request Broker
      • Referencing CORBA 3.3 specification,  Part 1, Interfaces 
        • CORBA IRObject interface
          • Subclause 14.5.2, IRObject
        • Interfaces and Containers Within a CORBA Interface Repository (IR) (CORBA 3.3)
          • CORBA IR
            • Subclause 14.5.4, Container
            • Subclause 14.5.3, Contained
          • CCM ComponentIR
            • Subclause 14.6.1, ComponentIR::Container
        • Object Identifiers in CORBA Services
          • CORBA RepositoryId Interface
            • Subclause 14.7, RepositoryIds
            • Formal RepositoryId Syntaxes
              • Identifier syntax: "IDL"
                • RepositoryId structure, from a perspective of IDL processing 
                  • IDL RepositoryId prefix part
                    • Subclause 14.7.5.2, The Prefix Pragma
                    • Subclause 14.7.1, IDL Format
                  • IDL RepositoryId version part
                    • Subclause 14.7.5.3 , The Version Pragma
                    • Syntax: "major.minor", each an unsigned CORBA short integer
                  • IDL RepositoryId name part
                    • IDL scoped name
                    • Subclause 14.7.1, IDL Format
                    • Subclause 7.20, Names and Scoping
              • Identifier syntax: "RMI"
                • Subclause 14.7.2, RMI Hashed Format
                • Java class name, hash code, optional version designator
              • Identifier syntax: "DCE"
                • Subclause 14.7.3, DCE UUID Format
              • Identifier syntax: "LOCAL"
                • Subclause 14.7.4, LOCAL Format
          • CORBA ObjectId interface 
        • CORBA Portable Object Adapter (POA)
          • Specification subclause 15.3.9, POA Interface
            • Subclause 15.3.9.18,  create_reference
            • Subclause 15.3.9.19,  create_reference_with_id
          • Specification subclause 15.2.4, Reference Creation
  • Name Syntax 
    • Subset: URI syntax 
      • URI as meta-syntax
      • One URI syntax per each URI scheme
      • URI Scheme registry: IANA – Assigned URI Schemes
      • Highlighting:
        • The "urn" URI scheme
          • The "urn" URI scheme as a meta-syntactic subset within URI meta-syntax
          • Implementing URN namespace syntaxes as registered in the IANA URN namespace scheme registry
        • Hierarchical and "flat" name syntaxes
          • Hierarchical name syntax implementation 
            • e.g. An http, https, ftp, or file URI interpreted in a service context
          • "Flat" name syntax implementation 
            • e.g. DNS URI [RFC4501]
            • any URI, in a context in which the URI is interpreted as an XML namespace [XMLNames]
              • Given a URI interpreted exclusively in an XML namespace context, the URI's structure is not applicably relevant for the URI's interpretation as denoting an XML namespace
                • URI schema, URI syntax, URI escaping: Beyond the scope of URI interpreted as XML Namespaces
        • Application Concept: Given a textual URI, O, initialize a URI object N of syntax A and a metaclass M, such that N will be an object of type M, and M may be located by the syntax A, within a single names implementation
          • URI syntax provides that the scheme of any absolute URI shall be denoted in the URI. The URI scheme S of URI O may therefore be determined from O, and the metaclass M located, with S applied onto a discrete index of URI scheme implementations.
          • Once the metaclass M is located, then M may be used to initialize a standard object N, using CL:ALLOCATE-INSTANCE
          • Once the standard object is initialized, then N may be used for method dispatching, in a manner of iteratively consuming O for initializing N
            • The application may observe, furthermore, that Common Lisp displaced arrays may be used to effectively initialize N without necessarily creating "new" native string objects in parsing the text string, O – thus, ostensibly minimizing system RAM usage in URI decoding.
          • This methodology may serve to allow an application to dispatch methods onto N, for the purpose of decoding O as A, in initializing object structural components A'1…A'n 
            • e.g. A'1…A'n for an HTTP URI [RFC3986]
              • A'1: network host name (i.e. DNS domain name or network host address)
              • A'2: Network service port
              • A'3: Service user name in HTTP URI
                • DEPRECATED: plaintext password in the URI userinfo subcomponent
              • A'4: URI "directory path" elements 
                • i.e. URI path subset to last forward slash
                • Represent as null if path is zero-length
                • To do: Define semantics of not using NULL as a slot value type.
                  • Within an application of CLOS (a sketch; WITH-SLOT-VALUE, BEHAVIOR-IF-SLOT-BOUNDP, BEHAVIOR-IF-NOT-SLOT-BOUNDP, and THE* denote hypothetical operators):
                    (WITH-SLOT-VALUE (VALUE OBJ SL)
                      (BEHAVIOR-IF-SLOT-BOUNDP (THE* (SLOT-DEFINITION-TYPE SL) VALUE))
                      (BEHAVIOR-IF-NOT-SLOT-BOUNDP OBJ))
                  • Compiler optimization onto slot value types
                    • Applicable only if a class' definition will not change throughout the Lisp host runtime session (i.e. TO DO: define a MOP intransient-class extension), and 
                    • Applicable only when a class' definition is available to the compiler, as a finalized class definition 
                    • May be implemented via a compiler macro, such as would be defined to compute the slot value type of any specific slot of any finalized, intransient class, in a manner that may allow for further optimization by the compiler. 
              • A'5: URI "file part" element 
                • i.e. URI path subset following last forward slash and preceding anchor or query subsets
              • A'6: URI "anchor"
              • A'7: HTTP URI (HTTP GET) query elements – (OR NULL (SIMPLE-ARRAY (CONS STRING (OR STRING NULL)) *))
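The scheme-dispatch methodology outlined above is written against CLOS (ALLOCATE-INSTANCE, then method dispatch onto N). As a cross-check of the concept, the following is a minimal Python sketch under the same flow: locate an implementation class M from the scheme S of a textual URI O, allocate an instance N without running its initializer, then dispatch a decode method to fill the structural components A'1…A'7. The names here (SCHEME_REGISTRY, URIObject, HTTPURI, decode) are all hypothetical, and urllib.parse stands in for hand-written URI parsing:

```python
from urllib.parse import urlsplit, parse_qsl

# Hypothetical scheme registry, approximating the "discrete index of URI
# scheme implementations": maps a URI scheme S onto a class M.
SCHEME_REGISTRY = {}

def register_scheme(scheme):
    def deco(cls):
        SCHEME_REGISTRY[scheme] = cls
        return cls
    return deco

class URIObject:
    """Two-phase initialization, approximating CL:ALLOCATE-INSTANCE plus
    subsequent method dispatch: allocate N uninitialized, then consume
    the URI text O to fill N's structural components."""
    @classmethod
    def from_text(cls, text):
        scheme = text.split(":", 1)[0].lower()  # S is denoted in any absolute URI
        impl = SCHEME_REGISTRY[scheme]          # locate M via S
        obj = impl.__new__(impl)                # allocate without running __init__
        obj.decode(text)                        # dispatch onto N, consuming O
        return obj

@register_scheme("http")
@register_scheme("https")
class HTTPURI(URIObject):
    # Structural components A'1..A'7 for an HTTP URI
    def decode(self, text):
        parts = urlsplit(text)
        self.host = parts.hostname                       # A'1: host name or address
        self.port = parts.port                           # A'2: service port
        self.user = parts.username                       # A'3: userinfo (password deprecated)
        slash = parts.path.rfind("/")
        self.dir_path = parts.path[:slash + 1] or None   # A'4: path through last "/"
        self.file_part = parts.path[slash + 1:] or None  # A'5: path after last "/"
        self.anchor = parts.fragment or None             # A'6: fragment ("anchor")
        self.query = tuple(parse_qsl(parts.query)) or None  # A'7: query pairs

uri = URIObject.from_text("http://user@example.com:8080/a/b/index.html?x=1#top")
```

Note that `from_text` deliberately bypasses `__init__`, mirroring the ALLOCATE-INSTANCE step, so that all slot initialization occurs in the per-scheme decode method.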
  • Name Syntaxes and Namespaces
    • Concept: A namespace as representing a container for names of a single name syntax
      • Applications onto URI meta-syntax
        • One name syntax per each URI scheme
        • One name syntax per each URN namespace
        • One name syntax per each Info URI info scheme
      • Applications onto UML, MOF, SysML
        • One name syntax
        • Arbitrary number of names and namespaces, within each model
      • Applications onto CORBA
        • RepositoryId format "IDL" as effective name syntax
      • Applications onto XML
        • One XMLNS namespace context per XML element node. Contents: namespace prefix to namespace URI bindings. Should be indexed on namespace prefix (for element and attribute namespace assignment) and on namespace URI (for namespace prefix initialization)
        • Namespace equivalence: dependent on namespace URI content (character-for-character comparison), not dependent on URI structure.
        • Namespace URI may be referenced by way of namespace prefix
        • Similar behavior for entity naming in RDF, OWL
        • Namespace inheritance (XMLNS)
        • Duration/Lifecycle: Each XML infoset's XMLNS namespace(s) may be freed via scheduled GC, after the root XML document node and its subnodes are no longer needed for presentation or other infoset processing, and are not referenced from within any single names index.
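The XMLNS notes above (per-element prefix to URI bindings, indexing in both directions, namespace inheritance, and equivalence by URI content) might be sketched as follows. This is a Python approximation, not the Lisp design contemplated in these notes; XMLNSContext and its method names are hypothetical:

```python
from collections import ChainMap

class XMLNSContext:
    """Hypothetical per-element XMLNS namespace context: namespace
    prefix -> namespace URI bindings, inheriting the parent element's
    bindings (XMLNS namespace inheritance) via ChainMap."""
    def __init__(self, parent=None, bindings=None):
        parent_map = parent.prefix_to_uri if parent is not None else {}
        self.prefix_to_uri = ChainMap(dict(bindings or {}), parent_map)

    def resolve(self, prefix):
        # Namespace assignment for an element or attribute "prefix:local"
        return self.prefix_to_uri[prefix]

    def prefix_for(self, uri):
        # Reverse index: locate a prefix bound to this namespace URI.
        # Equivalence is by URI content (exact string comparison), not
        # by URI structure or any scheme-specific interpretation.
        for prefix, bound_uri in self.prefix_to_uri.items():
            if bound_uri == uri:
                return prefix
        return None

# A child element's context inherits the root's bindings; a trailing
# slash yields a distinct namespace, since comparison is textual.
root = XMLNSContext(bindings={"ex": "http://example.com/ns"})
child = XMLNSContext(parent=root, bindings={"ex2": "http://example.com/ns/"})
```

A production design would likely maintain the reverse URI index as a second dictionary rather than a linear scan, and would tie each context's lifetime to its element node for the GC scheduling described above.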