Monday, October 19, 2015

The Day When the HPAN Open Book Project Began

Intro humor: Wondering if it may be possible to develop a dedicated web browser submodule in an FPGA circuit for a single-board computer. This question, of course, entails a short review of resources on the Web and about the SysV shared memory (SHM) architecture, such as implemented in the FreeBSD operating system -- in short synopsis, because CORBA CDR may find applications alongside SysV SHM in inter-process communications on a single physical machine. This article will resume another topic, shortly.

Gecko + Firefox at 131.74% WCPU
This morning, there was a certain announcement published on Twitter, with regard to a discovery likewise published by the Swiss National Science Foundation (SNF), vis-à-vis: A new electronic component to replace flash storage. In the original article that I had seen mentioned about the news, at the Twitter social microblog service -- vis-à-vis, Swiss researchers have created a memristor with three stable resistive states (newelectronics) -- the research is attributed to researchers at ETH Zürich. In some regards, juxtaposed to ETH's web site, the article published at the SNF's web site might seem overall more informative about the scientific development of the discovery. A short search for "memristor" at the ETH web site does not reveal any search results.

In both articles -- the first, published at the SNF web site, and the second, published at the newelectronics web site (UK) -- in both articles, the discovery is referenced as in a context of reprogrammable memory-oriented storage.

Of course, it would be a short semantic leap from such a topic to a topic of solid state device (SSD) storage modules -- as, in applications, SSD modules are certainly a common feature of mobile computing appliances, of contemporary "Rack mount" server architectures, and sometimes also of laptop computing architectures. The repercussions of a new three-state method for data encoding within reprogrammable memory-oriented storage could be profound in the storage manufacturing industry alone.

What struck me about the article: The discovery essentially describes a three-state mode of logical voltage analysis -- a matter distinct from the contemporary "Tri-state" output design, with its "voltage high" and "voltage low" states and its mysterious "high-impedance" state of a discrete electrical circuit, in a design-oriented view -- in that the discovery introduces a mode of logic in which a whole new unit of measure of information is required: the "Trit", a three-state, bit-like unit of measure for information.

Considering -- albeit in a manner of broadly foreshortened synopsis -- the broad range of conventions developed, to the contemporary "State of the Art", with regard to discrete binary states of logic, in a manner of a binary voltage-state model of discrete circuit analysis: the "Trit" -- in its applications -- along with the corresponding memristor technology, developed (it would seem, jointly) by ETH Zürich and the Swiss National Science Foundation, could substantially affect the very nature of electrical circuit design, primarily of circuits implementing a conventional binary voltage-state model.

It might seem like only a small item of news, a mere quantum of popular press in a very large information space of the contemporary Web. It is, potentially, a discovery of momentous significance -- significant not only with regard to prospective designs of computer hardware, but furthermore with regard to the essential nature of the logical/mathematical models applied in circuit analysis and circuit design.

The author of this web log article being, perhaps, something of a "Rogue scholar," maybe it could seem convenient that the author is in any way of a disposition to observe the significance of the discovery. Not as if to attribute it to any manner of a national stack of industrial laurels, however. The discovery is profound.

Focusing on some topics commonly referenced with regard to the mathematics developed of the contemporary electrical sciences, to the contemporary "State of the Art" in electronics -- voltage, current, resistivity, "and so on" -- and considering that any new development of the "State of the Art" must necessarily proceed from a number of previous developments, it may be possible to develop at least an estimation of how a "trinary logic" could be applied in circuit design. The author of this article -- perhaps stretching a little far, semantically -- estimates that it could serve to introduce a manner of a spherical model of mathematical analysis of electrical circuits.

Short of delving into a very visual illustration: Conventional electromagnetic waveforms can be rendered -- as in a voltage analysis -- for a time-series presentation on a Euclidean space of coordinates (t, E), for t representing time and E representing voltage, with the Euclidean coordinate space presented as a rectangular coordinate plane. In an alternate model for voltage analysis over time, E can be rendered as a polar radius, t as a polar azimuth, and the continuous voltage waveform illustrated -- whether instantaneously, or in a computationally interactive manner -- as a projection onto a polar coordinate plane.
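As a small illustration in code -- a minimal sketch in Common Lisp, assuming only the mapping just described and no particular plotting interface, with the function name being the author's own, for illustration:

    ;; Map a voltage sample E at time TT onto a polar plane, with E as
    ;; the radius and one PERIOD of the waveform per revolution.
    (defun polar-point (tt e period)
      (let ((theta (* 2 pi (/ tt period))))
        (list (* e (cos theta)) (* e (sin theta)))))

    (polar-point 0.25 1.0 1.0)  ; => approximately (0.0 1.0)

A time series of such points, rendered continuously, would produce the polar projection described above.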

Albeit -- the author sifting through his own thesis, presently -- the discovery at ETH/SNF does not itself introduce any new measure of fundamental electrical information. It does not add any new -- so to speak -- Greek letters to the formula of Ohm's Law. Thus, it might not seem sufficient to introduce so much as an "iota" of a third vector element -- pun intended -- to the (r, theta), or (E, t), analysis of voltage over time. Thus, perhaps it may not be immediately representative of any new manner of spherical model of anything, per se. Simply, the guesstimate -- so to speak -- towards a spherical space for analysis of electrical systems might derive only of the author's own small effort at estimating a "plane" for a "third state" in a trinary voltage model -- assuming there is a plane on which a binary voltage waveform may be rendered. There is such a plane, as to a presentation in terms of voltage and time, though not so much immediately in terms of the conventional binary logical voltage model -- that binary model being extended, in an orthogonal if not transitive manner, to a concept of binary voltage analysis, and principally to a concept of an information content in and of voltage states, in a direct current (DC) circuit.

The discovery of an application for a trinary logic in a "real world" electrical system -- towards an estimate of potential applications, simply -- may most certainly entail a consideration of what a voltage state "Means," in a circuit.

Beyond the perhaps simple analysis of a polar state of voltage -- as with regard to theoretical "Electron surplus" and "Electron deficit" states, in a charge-oriented/kinetic theory of circuits -- the polarity of a charge source, as may be a property of a continuous, alternating current flow, might not seem to be immediately "Factored in" to a discrete logical model of circuits. In a shorter phrasing: Polarity may not typically occur as a concern in digital circuit design. Logical circuits typically operate on direct current -- ideally, with no reversal of current polarity occurring in a logical circuit.

Considering the "high/"low" or "on/off" state of a discrete signal in a DC circuit as it being a single, discrete state or quality representative of the DC voltage of the discrete signal -- as onto any single voltage-level model, whether of industiral conventions in Transistor Transistor Logic (TTL), or conventional CMOS logic, or in any of the newer low-power logics typically found in applications of mobile appliances -- perhaps it could seem to greatly complicate the manufacturers' responsibilities for circuit design, if as to introduce a trinary voltage state model.  That it could -- in ways -- that it could positively affect the overall "Information bandwidth" of circuits, perhaps that  might be sufficient as to retain the manufacturers' attention to the topic.

If a unit of information may be measured as in a base-three or trinary model of voltage states (E_0, E_1, E_2), the third logical state of the trinary model would not immediately "fit in" with either the TTL or the CMOS voltage state model. E_2 would need to be defined with an "acceptable voltage range", as much as E_0 and E_1 are presently defined to an "acceptable voltage range" in the present TTL and CMOS models.
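As a simple illustration of the information arithmetic -- a minimal sketch in Common Lisp, assuming nothing of any manufacturer's voltage model, with the function name being illustrative only:

    ;; TRITS is a list of 0, 1, 2 digits, most significant first,
    ;; interpreted as a base-3 integer.
    (defun trits-to-integer (trits)
      (reduce (lambda (acc trit) (+ (* 3 acc) trit))
              trits :initial-value 0))

    (trits-to-integer '(2 1 0))  ; => 21
    (log 3 2)                    ; => ~1.585

Each trit, in effect, carries log2(3), approximately 1.585, bits of information, juxtaposed to one bit per binary state.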

This is all diverging, though, from the author's own novel project idea of making "A thing" out of the geometry model in the Common Lisp Interface Manager (CLIM), seconded with an application of the Garnet KR subsystem for a design of an algebraic system ... and supported with a project the author of this article now denotes as the "Open Book" project, manageably a project developed under the Hardpan Tech label. The project has existed for only half of a day, and it is already at full momentum ....

Perhaps, more "Updates" will "Follow," soon.

In albeit a simple sense, the discovery may introduce a new manner of meaning about analysis and design of information-carrying circuits -- such as could be, as a category of circuits, juxtaposed to so many mechanical work-producing circuits as may be applied in electrical mechatronic systems, as well as photo-electrical circuits, such as may be applied, in any typically non-central manner, in solar-electrical generator systems.

The author will resume an imitation of a decorative potted plant, presently.

Saturday, October 17, 2015

Late Announcement of a Fork of CLORB, and Documentation Design, a CTags-to-DITA Model, and a Concept of Security Policies for Common Lisp

On reviewing a set of notes I've begun developing towards producing an unambiguous outline of concepts applied with regard to material sciences and computing, then in considering a possibility of developing a modeling service extending the topical outline of the article with models of tangible computing machine designs -- in no radical estimation of concepts of intellectual property, simply focusing on a modeling view, this morning -- I've returned to a fork of CLORB that I had created at GitHub, presently named hpan-dnet-corba. The name of the fork is derived from the name of the Hardpan Tech projects set, as well as a concept of a distributed data network. Presently, I am fairly certain that the repository will be renamed. I believe that I may be fairly certain that this will not interfere with anyone's present work in regard to software development -- the repository at GitHub, in its present state, has not been forked, starred, or "Watched". Neither have I been able to proceed to any immediate development of the codebase in the repository -- so far, directing my attention to other projects. Of course, GitHub will automatically forward any URLs on event of a repository name change.

On reviewing the codebase of the CLORB fork, this afternoon -- not firstly considering any of the immediate "TO DO" items: for instance, to ensure that CLORB will apply a portable sockets interface, such as usocket, a portable threading interface of some kind to be determined, and a portable operating systems interface such as osicat; then to proceed to update the CLORB baseline for the latest edition of CORBA, as well as to develop an implementation of the CORBA Component Model (CCM) in Common Lisp, to include services for component assembly and component activation, moreover in a manner as may be compatible with component definitions not written in Common Lisp but compiled to any single object file format -- my most immediate concern, superficial though it may be, is that I do not want to "Get lost in the codebase." A short sketch of the portability direction follows.
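As a minimal sketch of that direction -- assuming a Quicklisp environment, and assuming nothing of the CLORB internals; the function name is a placeholder for illustration:

    ;; usocket in place of implementation-specific socket calls.
    ;; FN receives a binary stream connected to the remote ORB endpoint.
    (ql:quickload :usocket)

    (defun call-with-orb-connection (host port fn)
      (let ((socket (usocket:socket-connect
                     host port :element-type '(unsigned-byte 8))))
        (unwind-protect
             (funcall fn (usocket:socket-stream socket))
          (usocket:socket-close socket))))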

Of course, that would not be "All of the project," either, as far as updating the fork I've begun of the CLORB codebase. Likewise, I would like to develop a set of Common Lisp metaclasses for reflective modeling of the IDL definitions that will be implemented with the codebase. This, I am certain, would be relatively easy to develop, with a small modification of the IDL compiler, onto a specific namespace syntax for IDL in Common Lisp, and a compatible definition of object services for Interface Repository reflection in CORBA. This extension would depart from the traditional IDL binding for Lisp onto CORBA -- incorporating some functionality available in a Common Lisp dialect, so far as may be available of Common Lisp implementations including an implementation of the Metaobject Protocol (MOP), the MOP representing an extension, transitively, of the Common Lisp Object System (CLOS).

Furthermore, I would like to develop a concept of a manner of "Specialized dispatching" of Common Lisp method definitions -- if definitively possible -- such as for implementing an instance of a definition of an object method A operating on a parameter B, within an arbitrary class C, i.e. C::A(B), such that the method definition is translated to a method A having a lambda list with specializers (C B) in Common Lisp. For instances in which A is not specialized onto any class D, then its unique application in C may be collapsible to a specialization onto B alone. Of course, if A would later be defined with a specialization onto any class D, then its implementation would need to be "Un-collapsed" to allow for dispatching onto both C and D. This, of course, might entail an unconventional extension onto a MOP implementation itself, but it could be developed so as to be portable onto MOP. Considering that any possible "Un-collapsing" would be performed at component load time, it may be minimally expensive in regard to computational resources, while allowing -- ideally -- for something of an optimization in regard to runtime method dispatching. As to whether any further "Dispatch collapsing" could be performed ... this should all be preceded by an in-depth study of the respective MOP implementation. Presently, I may wish to assume that a MOP implementation is already implemented to its optimal semantic and procedural effectiveness for standard method dispatching in Common Lisp, but the nature of the conventional IDL-to-Lisp binding -- I think -- may seem to suggest that an even more optimal model may be possible. Not as if to split bits over a matter of byte sequencing, I think it represents a useful goal for a CORBA implementation.
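In a minimal sketch -- assuming nothing of the IDL compiler itself, with all class and operation names here being hypothetical -- the uncollapsed translation of C::A(B) might look like:

    ;; An IDL operation C::A(B), rendered as a generic function whose
    ;; lambda list specializes on both the interface class and the
    ;; parameter class.
    (defclass c () ())
    (defclass b () ())

    (defgeneric a (instance param))

    (defmethod a ((instance c) (param b))
      ;; Body standing in for the operation's implementation.
      (list :dispatched-on (class-name (class-of instance))
            :with (class-name (class-of param))))

    (a (make-instance 'c) (make-instance 'b))
    ;; => (:DISPATCHED-ON C :WITH B)

The "collapsed" form described above would simply omit the C specializer -- as (defmethod a (instance (param b)) ...) -- until some class D would require the full dispatch.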

So far as with regard to a concern of object modeling, there could seem to be an irony -- that here I am beginning to consider to "Put the wheels to the road," in a manner of speaking, to proceed now about CORBA development in Common Lisp, and towards a purpose of developing a no doubt intellectual-property-agreeable model repository service ... and yet I may be unable to develop a model for this project until having produced this project to such a point at which it would be applicable in a modeling service -- or else, until perusing the codebase manually.

So, there is an exit condition from the semantic loop of that concern -- namely, as to read the source code. Again, though, I am at the concern to not "Get lost in the source code."

In extension of the concept of "reading the source code", of course I would also want to begin to develop a comprehensive reference about the source code, namely in a documentation format external to the source code. Personally, I would not prefer to develop such a manner of reference with an HTTP virtual filesystem service intervening, where a local filesystem service may be sufficient.

There's a side note about the reStructuredText (RST) format that could seem apropos, inasmuch as RST offers a certain number of syntactic features effectively extending the set of markup types available in the Markdown format. GitHub provides instantaneous RST-to-HTML translation; though it may not be the most computationally efficient process, compared to writing the documentation originally in HTML format and publishing it likewise in HTML, text-oriented markup formats may typically be more succinct than HTML, and would probably be more "Friendly" to editors not familiar with an XML format.

Alternately, it may be feasible to develop a DITA-formatted topic repository about the original CLORB codebase, then to update the same topic repository with any later notes as may be added onto any reference elements generated in the immediate Lisp-to-DITA translation. Though this is not an enterprise project, and it does not have an enterprise management base to manage it by, it represents nearly an enterprise scale of endeavor -- in a small manner, as it might seem. It can be approached functionally and manageably, so much as to document the existing CLORB codebase, even if in a manner such that the documentation may be intermediate to any updates of the codebase.

Of course, to keep the documentation synchronized with any changes to the source code, it would need attention to both the documentation and the source code, as if simultaneously, throughout the duration of any updates to the source code.

Much of the documentation might be generated, initially, with an application of CTags -- if not of an extensional tool, such as Exuberant CTags -- then with an application of a transformation model for generating documentation from a set of templates, such as may be applied to the tags lists generated by the respective CTags implementation. Such a procedure, of course, could be performed onto any single language supported by the respective CTags implementation, given any suitable set of document templates -- see the small sketch below. It might not be in all ways analogous to LXR or Doxygen, though accomplishing a result in some ways similar to Doxygen -- namely, a structured reference about source code forms -- while ideally producing documentation files in a structural format resembling the Common Lisp HyperSpec, such as may include -- by default -- the contents of any available documentation strings, and such as may be extended, potentially, with source code references -- and a corresponding URI transformation -- in a manner analogous to LXR.
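A small sketch of the tags-to-stubs procedure -- assuming only the conventional tab-separated "tags" file format; the stub format itself is a placeholder, not any template model in full:

    ;; Read an Exuberant CTags "tags" file and emit a documentation
    ;; stub for each entry, skipping the !_TAG_ metadata header lines.
    (defun generate-doc-stubs (tags-pathname stream)
      (with-open-file (in tags-pathname)
        (loop for line = (read-line in nil)
              while line
              unless (or (zerop (length line))
                         (char= (char line 0) #\!))
                do (let* ((tab1 (position #\Tab line))
                          (tab2 (position #\Tab line :start (1+ tab1)))
                          (name (subseq line 0 tab1))
                          (file (subseq line (1+ tab1) tab2)))
                     (format stream "~&Name: ~A~%Defined in: ~A~%~
                                     Description: [TO DO]~2%"
                             name file)))))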

Thus, it might produce not so much an IDE-like web-based presentation for linked source code review, rather a sort of "Skeleton" -- 'tis the season -- for support of documentation authoring onto an existing codebase. It would not presume to provide a complete set of documentation files, but merely a skeletal documentation structure -- such as could then be edited by software developers, to add any documentary information that would not otherwise be available, immediately, in the source code. In a sense, it would serve to provide a manner of a source code annotation service, but with the annotations contained in documentation files, not directly in the source code.

In regard to a design of a template model for application in such a manner of a documentation skeleton generator tool, it might be beneficial if the documentation and templates may be maintained -- in some ways -- separately, with a semantic linking model to ensure that the documentation may be automatically "Linted" for compatibility across any changes to the source code -- "Actively linted," moreover, such that if an object is renamed in the source code, its documentation will be renamed; if removed, then its documentation removed; and if any new features would be added, then a new documentation stub would be added for each feature.

Speaking of features, in a context of Common Lisp, some features may be difficult to "Parse for," however -- the Common Lisp feature syntax itself, for instance, such as "#+quux" or "#-quux" or any more complex expressions such as "#-(and quux (not quo))". Perhaps it may be in no small sense of coincidence if such expressions might -- in ways -- resemble something like a C preprocessor syntax, moreover being evaluated -- namely, at the nearest approximation of "Compile time" in any Lisp reader/evaluator procedure -- in a manner analogous to how a C toolchain evaluates a C preprocessor directive, but minus any analogy of macro syntax and evaluation. In a sense, it is like the Common Lisp read/eval/print loop (REPL) applies a preprocessor in the reader component, intermediate to a computational evaluation of forms read by the reader, then any printing of return values or stream output values as may result of the evaluation. It might seem, in some ways, "More tidy," but a whole lot less common than the language's name might seem to imply.
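To illustrate -- a minimal sketch, with QUUX and QUO being hypothetical features, nothing of any standard feature set:

    ;; Reader conditionals are resolved at read time, loosely analogous
    ;; to a C preprocessor #if, but with no macro expansion involved.
    (defun platform-note ()
      #+unix "Read on an implementation advertising the UNIX feature"
      #-unix "Read everywhere else")

    ;; A compound feature expression: the following form is read only
    ;; unless QUUX is present on *FEATURES* without QUO.
    #-(and quux (not quo))
    (defvar *quux-workaround-p* t)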

So, together with such a short sidebar about tool stacks in C, continuing ... the documentation system, if it can update the documentation files in parallel to any updates observed of the source code itself -- maybe it could be presented to market as a manner of a "Smart" documentation system. Aside from so many concerns of marketing: if not updating the documentation tree in response to any changes in actual definitions of compiled objects, then as long as any "Manually written" documentation is maintained in a manner separate from any "Generated structural" documentation, the "Manually written" documentation can be presented for update, corresponding to any change in the structural definition of an object.

It might seem computationally frivolous, perhaps, to propose to keep a documentation tree simultaneously linked with an object system, and the object system's source code and documentation tree both mapped onto a filesystem managed under a Software Change and Configuration Management (SCCM) service. It's certainly a small toss from the CTags-parser paradigm, but it may be only a small toss inasmuch. The most computationally expensive aspect of such a feature may be in simply monitoring any source code file for changes, then detecting which definitions a change applies to, then processing the documentation about those definitions so as to reflect the change in the source code -- likewise, maintaining a manner of a table between object definitions and source forms, such that if a compiled definition is replaced with a new definition, the developer may be presented with a set of convenient, if not in ways pedantic, options for modifying the documentation about the original definition.

Of course, considering that an object's definition, in a compiled form, may not be so much "Changed" in its compiled data as "Replaced" with a newly defined object of compiled data, it would certainly need some implementation-specific modifications to implement this albeit ad hoc proposal in full -- that the software system could be programmed to detect a change in the definition of a named object, and, if maintaining a definition-source state about the name of the object (as some Common Lisp implementations may, at developer option), that the detected change could be noted in the software's program system, then followed with a query to the developer by some manner of an interactive prompt.
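In a portable, if simplistic, sketch -- the registry and the macro here being the author's assumptions for illustration, not any implementation's definition-source facility:

    ;; Track named function definitions, so that a documentation tool
    ;; could notice a redefinition and prompt for documentation review.
    (defvar *definition-registry* (make-hash-table :test 'eq))

    (defmacro tracked-defun (name lambda-list &body body)
      `(progn
         (when (gethash ',name *definition-registry*)
           (warn "Redefining ~S; its documentation may need review."
                 ',name))
         (setf (gethash ',name *definition-registry*) ',lambda-list)
         (defun ,name ,lambda-list ,@body)))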

Towards developing a programmed security model onto Common Lisp: given the very fact that a Common Lisp implementation may allow any item of code to redefine any existing item of code -- sometimes, as optionally filtered with "Package locks" -- we must assume that all of the software code having been evaluated by a Common Lisp implementation is instantaneously trusted, moreover that no software will be evaluated that is not trusted -- an oblique sense of "Trust", by no means programmatically defined. Perhaps the security policy model defined in Java could seem to be of some particular relevance, at that, short of any ad hoc and distinctly not code-related approaches to ensuring a manner of discrete security of software code and program data.
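"Package locks," for reference, are an implementation-specific feature. A minimal sketch under SBCL's package-lock extension -- the package name being hypothetical:

    ;; SBCL-specific: a locked package signals an error on attempts,
    ;; from foreign packages, to redefine or unintern its symbols.
    (defpackage #:trusted-api
      (:use #:cl)
      (:export #:stable-entry-point)
      #+sbcl (:lock t))

A coarse mechanism, of course, juxtaposed to a Java-style security policy model.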

By no means will this project represent any manner of trivial convenience. Even in the simple effort of developing so much as a design for a documentation system, it is somewhat apparent that there may be some "Lower level concerns" -- such as that the Common Lisp language development ... may be behind by a few updates, with regard to the "State of the Art" in commercial software development, quite candidly. Though Common Lisp is a computationally comprehensive programming language, if Common Lisp may be applied within a secure, trusted commercial communication system -- firstly, we may wish to consider, each, our own integrity as to how much it is a trusted programming language, juxtaposed to any programming language as may provide a distinct level of security policy definition and security policy enforcement, ostensibly with such security policy features being applied throughout commercial software systems.

The author of this article is not one to place any chips on the table, before an analysis of such a concern.

It may be not as if Common Lisp were vastly behind other programming languages -- short of anything in regard to "Warm fuzzy" marketing -- but the security policy issue may be approached, perhaps, without any too broadly sweeping changes to any single Common Lisp implementation.

So, but there was a discussion about documentation, in this article -- albeit an in many ways breezy, verbose discussion -- an in all ways rhetorical discussion, likewise lacking any great presentation of detail. This article describes a manner of a semantic model for working with documentation and source code, in parallel. This article does not go to great lengths for a description of the DITA format, or XML Stylesheets, or the Document Object Model.

Presently, this article returns to the original topic, of generating documentation from CTags files. The topic of IDE-to-source-code-to-object-definition linking should be approached with a manner of a later demonstration, but first there would need to be an IDE compatible with the demonstration. Secondly, the topic of how-to-prevent-unwanted-object-redefinition-scalably-and-well could be approached with a much more detailed analysis.

Towards a manner of an event-oriented model in regards to definitions in Common Lisp programs, appending a few ad hoc notes:
  • Types of Program Objects, "Program Top Level"
    • Variable Definitions
    • Type Definitions
    • Class Definitions
      • Structure Class Definitions
      • Condition Type Definitions
      • Standard Class Definitions
    • Functions
      • Standard Functions
      • Funcallable Instances
      • Generic Functions
    • Method Definitions
    • Macros
    • Special Operators
    • Packages 
    • System Definitions
    • Declarations
      • FTYPE Declarations
      • Type Declarations onto Variables
    • Closures and Closure Environments
      • Concept: Null lexical environment, i.e. global environment, as an effective "Top level closure"
      • Concept: Redefining a lexically scoped object defined in a non-global environment, A-OK ?
      • Concept: Redefining a 'special' scoped object defined in a non-global environment, A-OK ?
  • Events
    • Event: Program Object Definition
      • Instance: One of Defvar, Defparameter, Defconstant
      • Instance: LET
      • Instance: Defclass, or related CLOS, MOP protocol procedures
      • Instance: Defun
      • Instance: Defgeneric
      • Instance: Defmethod 
      • Instance: Defpackage
      • Instance: Defsystem or similar
    • Event: Program Object Redefinition
      • Instance: SETF  
      • Instance: SETQ
      • Instance: Object definition onto a previously defined object
        • Re-DEFCONSTANT: Implementation-specific handling [exists]
    • Event: Program Object Definition Shadowing
      • Not expressly 'redefinition', more entailed of both closure definition and component program object definition 
      • Synopsis: a lexical scope is defined in which a new definition is created, in a manner so as to effectively shadow a definition previously created -- a definition bound to the same name, for the same program object type -- in a containing lexical scope
      • May be a part of a shadow => redefine procedure
      • May or may not be approached "Maliciously"
      • May produce unintended side-effects in software programs, e.g. if *STANDARD-OUTPUT* is shadowed so as to pipe all data through a digital wormhole to an alternate universe (see the sketch following this outline)
    • Event: Program Object Deletion
      • Note: Though defining a top-level interface for garbage collection, Common Lisp (CLtL2) does not define any single 'finalize', 'delete' or 'free' procedure, such as could be applied for dereferencing and deallocating objects manually
      • Instance: makunbound (a symbol's value binding)
        • Note: Whether this would actually result in the deletion of the program object, or merely in the "Un-binding" of the program object from any single symbolic name, may be implementation-dependent, pending garbage collection
      • Instance: fmakunbound (a symbol's function binding)
        • Does not immediately affect any call sites at which the respective function has been compiled inline
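A short sketch of a few of the events outlined above -- special-variable shadowing within a dynamic extent, then deletion of a function binding:

    ;; Shadowing *STANDARD-OUTPUT* redirects output from SHOUT for the
    ;; dynamic extent of the LET, without redefining SHOUT itself.
    (defun shout () (format t "hello~%"))

    (with-output-to-string (captured)
      (let ((*standard-output* captured))
        (shout)))                  ; => "hello" captured as a string

    ;; FMAKUNBOUND removes the global function binding by name.
    (fmakunbound 'shout)
    (fboundp 'shout)               ; => NIL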

Friday, October 16, 2015

Toolchains in a Key of C

In developing a lively, component-oriented view of software development -- allegorically, as beginning from a location of "the ground," towards a limit of "upwards" -- it may be logically reasonable to begin with a component, "The Toolchain." Not as if to propose any singular, ad hoc definition of a concept of a topic so broad as toolchains -- and in this single article, as such -- theoretically, a definition of "The Toolchain" begins with a definition of "The operating system." In the present State of the Art, that would likely entail one of: Linux, any single BSD -- for instance, FreeBSD, NetBSD, OpenBSD, or any BSD happily derived from any of those three "Main BSDs" -- or OS X, or Microsoft Windows.

Proceeding in a rough estimate along a timeline going backwards in relation to present time, previously the State of the Art would have also included BeOS, NeXT, MS-DOS, IBM-DOS, CP/M, the Lisp Machines of yore, and any number of UNIXes whose development in any way chronologically parallels the same timeline. The Industry has had its trends, for a number of years, before Social Networking web log networks ever became such a popular topic as today -- a topic as much for advertisers as for Social Networking networkers, and the more of the social networking service user community. If assuming that we may say that the present State of the Art is the only State of the Art that has ever existed, in all known time, we might likewise be assuming as if life proceeds without a sense of historical context. Though that could be quite a trendy way to not view history, perhaps it may be understood that the present State of the Art has developed only of the previous State of the Art, at any moment of time. If we may leave aside so many stylistic brand names and endeavor to consider how the present State of the Art has developed, perhaps we can learn more of the present State of the Art, if not of any estimable "Future" State of the Art, by studying any works of the previous State of the Art. If that does not tire us fully, perhaps it may begin to seem that not all of the State of the Art may have developed as if only along any single linear chronological trend. Thus, even as if to analyze the architecture of an operating system comprising any manner of an obvious element of the present State of the Art, there may be a whole lot of "Previous work" available, such as may serve to inform the present discussion -- as even of so many discrete encodings of program codes onto punch cards, and applications of Teletype machines for other than radio telecommunications, and any trends marking the evolution of terrestrial semiconductor manufacturing methods. The State of the Art, clearly, being a material domain, though not exclusively of any single material vocation -- not even as if singularly of the many works of marketing, of works of media ever apparently seeking to draw a social attention in one way or another across the present State of the Art, if not furthermore to direct the viewer's attention to any single commercial product -- perhaps it cannot all be said to derive back to a material physics and a corresponding mathematics, ever developed of any possibly more intuitive laboratory.

Inasmuch, it might not be said that all of the State of the Art derives back to knowledge, or knowledge deriving back to language, or everything under the sun deriving back to a simple concept of communications. Such naive theses, though presenting any manner of an immediate sense of perspective, may seem difficult to prove, to any detail, logically and at scale. Perhaps not all of the universe is merely a mote in the eye of a grand, benevolent narcissist, but it would seem that much of the known universe derives, at least, to a sense of information.

So, if we are to begin at toolchains, it might be expedient to skip ahead past the estimable origin of the physical universe, to leap a little ways across the evolutions of mineral mining and tool production techniques, to take a long way around the events of empires, piracy, and war, and hop on up to the present day, in which all operating systems may appear to be constructed of C or a programming language deriving of C, in terms of syntax, semantics, and evaluation procedures. The subtle leaning of Java over to anything like a Lisp -- even so far as the lambda nomenclature of the Java programming language, edition 8 -- this might be ignored simply as an aberrant trend, nothing whatsoever arcing around to another method of systems design, nothing in any way suggesting anyone had constructed any microprocessors either wrongly or in a way merely keeping up with the industry's state at any point in time. Surely, every microprocessor must have an Arithmetic and Logic Unit, and every OS must be constructed of C or a dialect of C ... except for those that are not.

So, then -- taking some liberty to try to construct a light-hearted point of view of this thesis -- we may begin with the present state of the art in C toolchains.

...and the author will return to this thesis, shortly, with a reference to the K&R book, section 4.11, and no further aside about a story by -- estimably -- a satirist writing by the name, Ayn Rand.

For want of expedience, this article will resume the discussion not at the development of the first C dialect, in 1971 [Raymond2003], and neither with an analysis of any market trends, ahead to which the GNU Compiler Collection (GCC) first addressed the GNU General Public License (GPL) to a Patents Industry, thirdly leaving aside any analysis of the complex interleavings of the LLVM toolchain and non-BSD operating systems including OS X and Android -- lastly, proceeding to an immediate, albeit in ways ad hoc, overview about a generic model of a C toolchain, as to include -- in the albeit naive model -- a C preprocessor, a C compiler, and a C linker, such that the linker produces -- in a procedure of processing certain intermediate compiled object files produced by the C compiler -- a loadable binary object file, such as may be later evaluated by an operating system, whether evaluated as a "runnable" software program having any single main() routine as its entry point for its launching as a software program, or evaluated as a library file for linking with other binary object files. This generic model may be difficult to describe to any detail, for how it may serve as a model of the components of any single toolchain, with the addition of any more specialized and toolchain-specific components, and an aside to address compiler components such as may produce an intermediate or loadable object file from a source code language not C.

Of course, as well as those components of a C toolchain -- the preprocessor, the compiler, and the linker -- there is also the inevitable Makefile implementation, such as provides instructions for how to "Put the pieces together," to any point of program evaluation, in producing evaluable programs. A Makefile interpreter, in some regards, might be cast in a metaphor of a mechanical chef.

Aside from the C toolchain, of course there are software programs that may -- in ways -- resemble a Makefile interpreter, such as the Ant program, in a Java toolchain, or the inimitable ASDF, in a Common Lisp toolchain, as of the present state of the art in Common Lisp system definition utilities -- see the short sketch below. The author's novel thesis that all of these toolchains could be -- theoretically -- translated into a Common Lisp interpreter might seem too novel to be obviously relevant to the State of the Art. For all of the UNIX architecture developed in C, furthermore, it might not be fortuitous to abandon such architecture for a Lisp Machine, without making a comprehensive study of the existing work.
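For reference -- a minimal, hypothetical ASDF system definition, being the Common Lisp analogue of a Makefile's target graph; the system and file names are illustrative:

    ;; Components are compiled and loaded in dependency order,
    ;; much as make(1) resolves its targets.
    (asdf:defsystem #:glister
      :description "Illustrative system definition."
      :depends-on (#:usocket #:osicat)
      :components ((:file "package")
                   (:file "server" :depends-on ("package"))))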

Of course, not all of UNIX is implemented in C. In fact, the FreeBSD operating system uses a bit of Forth in its bootloader. Ever, there are these novel things that so impede a linear introduction of the State of the Art. Forth is a language as much allied to a concept of stack machines as is the Lisp implementation described in AI Memo 514, in which the authors propose to develop a microprocessor absent an ALU, as well as proposing an implementation of Lisp departing, in some ways, from the going trends of CISC microprocessor designs in the industry at the time -- to the author's best understanding of such features of the State of the Art. Well so, but now we have C, the C preprocessor language, Makefiles, and Forth, as well as anything else that may be compiled to a binary loadable object file, insofar as source code languages -- with a note in regard to intermediate object file formats, and loadable object file formats.

The author has read that there are criticisms of Lisp syntax. The author fails to understand, How can this be? Is it too far unlike the linguistic sandwich bar of the modern toolchain? Could it be, perhaps, too far unlike a CISC language?

On top of -- or, in another way, below -- C, of course there is the syntax of any single assembler. Below the assembler, in a similar arc, any individual Instruction Set Architecture.

Not as though to begin a Lisp Advocacy thesis forthright: ironically, there's something like an assembler defined in one Common Lisp implementation, named CMU Common Lisp -- the low-level compiler VOP framework of CMUCL being then inherited by Steel Bank Common Lisp (SBCL), with SBCL being originally a fork of CMUCL. How this may seem to parallel an evolution of a BSD operating system -- moreover that CMUCL's architecture may seem, in some certain ways, curiously BSD-like -- it might not seem to contribute an obvious whole lot(TM) to the State of the Art, Immediately Today(TM), to make any too lengthy dissertation of such topics of systems evolution, and well would the author go out of depth to speculate on the similarity. No myth, no magic: perhaps an independent operating system can be developed out of Common Lisp, once more, but there is a dearthy lot of existing work to observe, if not to study, in UNIX systems.

Perhaps the author has begun to mistake this English language for Makefile syntax, if not merely a disposable lexicon. Of course, BPMN might be far more succinct, visually -- if likewise more difficult to reproduce if discarded -- to describe a thesis topic or a recipe.

And so, the author must take another aside, with a glib and/or drab nod to the works of the grand satirists in literature. This article has now breezed across the whole C toolchain, topically, and here it is not even August yet.

Ed. Note: This article may be reviewed, at some later time, towards clarifications about compiler architectures, including: The nature of "Intermediate" compiled object files (e.g. *.s) whether present in C compilers, C++ compilers, or otherwise; the role of the assembler, in the procedures of the compiler.

Before commencing to present the hot topic of the evening's article -- as of a simple illustration of two ways to produce object files, each of a popular though by no means industry-dominant programming language, and as such, to produce object files without an immediate application of a C compiler -- the author should take care to define, initially, what the term, object file, may denote -- as in how the term may be defined, at least in a context of the media object comprising this single article, if not also of how the term may be encountered in other literature.

In a metaphor to granola … non. This thesis shall presently disembark to a discussion of machine architectures, focusing primarily on microprocessor architectures, specifically Intel, MIPS, and ARM microprocessors. This representing an adventurous aspect of the evening's thesis, a food with a suitable proportion of complex carbohydrates may be recommended … if not a draught of the evening's coffee, along with.


This intermission brought to you in a format of lyrical music

 
 [Article will resume momentarily]

Ed. note: For some intents and purposes, the Executable and Linkable Format (ELF) may seem to be "Enough to know about", as with regards to object files produced by compiler toolchains on UNIX platforms -- at least, so far as up until a point of actually developing a compiler [TO DO: FINALIZE ARTICLE] (NOTES)

Ed. note: Though the Embeddable Common Lisp (ECL) implementation of Common Lisp can be applied to produce object files, it is not without applying a C compiler as an intermediary component. Thus, the comment -- in the previous -- as if it were possible to generate an object file with ECL, without a C compiler, does not hold. Neither might it hold as if LuaJIT were not applying a C compiler, itself, in producing object files for the respective machine of its application. As stated in the previous article, the "Hot topic" of the evening might seem to be a "Dud," in such regards.

Ed. note: With regards to how ECL and LuaJIT may be applied with the LLVM toolchain, such a study may be addressed at a later time.

Ed. note: Follow up with documentation about ctags, etags, Exuberant CTags, and llvm-clang ETags/CTags, as with regard to source code modeling and review. See also: Doxygen; UML; SysML; MARTE.

Ed. note: The goal of this article was to develop a singular overview about compiler toolchains, as with regard to (1) how a compiler toolchain is applied as a component of an operating system; (2) how a compiler toolchain extends of any single microcontroller's supported instruction set architectures (e.g. amd64, SSE2, MMX; on GPU microcontrollers, lastly, CUDA). Beyond such a description of existing work, in contemporary operating systems design, perhaps it may seem frivolous to endeavor to assert that a reproducibly usable operating system may be constructed for contemporary microcontrollers, without an application of a C toolchain.

DevOps Servers - Jenkins or DIY?

In developing a small concept of producing a DevOps server for the environment of a single Small Office/Home Office (SOHO) network, in the past couple of days I've been reviewing a concept of installing Gitblit, JSPWiki, Roller, and Activiti, as web services, then to develop a minimalist web-based portal front-end for integrating those individual web service components into a single "User experience". These components would be installed, originally, to one of my old laptops -- it serving a dual-purpose role as an old laptop retained of my own purchase, now a manner of a sentimental artifact, sure -- presently applied as a FreeBSD server on a SOHO Local Area Network (LAN).

The local web-type services, of course, would not be the only features of the same server's service mix, as it would also publish a Git service from within a FreeBSD sandbox. The Git service, of course, could be published from a sandbox maintained in a manner separate from the host's web server sandbox -- that, as with a small amount of software to provide a filesystem bridge between the two -- both sandboxes operating on the same computing machine, however, with both being procedurally isolated from the "Sandbox controller" server. Even in so much as a design for such an application of the FreeBSD sandbox, i.e. the jails framework, it might already represent a manner of a component-oriented software service design, though it is not yet in every way a detailed software service design.

I've favored this design, due to its minimalist nature and its development singularly in the Java programming language. I'd begun the design after reading a comment -- from 2012 -- by James Gosling, a positive comment about Gitblit [Gosling2012PapaMau]. Of course, a positive comment by one of the original developers of the Java programming language may seem to carry some mileage.

In my own small software design work, proceeding from a review of the Gitblit web services component, I'd even developed a clever name for the design -- a DevOps portal design for my own LAN, but such as should be scalable beyond the "Single use" -- naming the design "Glister," after an artifact of the Heritage Universe, a series of science fiction books written by a physicist, Charles Sheffield. In a context of the story of the Heritage Universe books, Glister is an Artifact that appears early in the story. In a context of a SOHO network, Glister has been -- thus far -- simply an easy-to-remember name for a single service design. As I've developed a substantial amount of writing about the design, in my Evernote notebooks, it is not a design I would want to abandon hastily.

The design of the Glister DevOps server, in some ways, mirrors the design of the EmForge Portal. In a manner, both designs would introduce a Business Process Management (BPM) component to the conventional service mix of a wiki, web log, issue tracking, and web-based source code review service -- as also seen, with some slight differences, in Edgewall's Trac, different at least in regard to the implementation of each service of the respective service mix. Of course, Trac favors the Subversion change management service, as does EmForge -- whereas Glister would apply Git, with a Gitblit web interface providing a lightweight localized service for immediate web-based source code review.

Of course, the issue tracking feature might not seem as apparent as the novel "Other features" of the architecture.

In the Glister portal, Activiti would provide a BPM management interface. Thus, it may seem to effectively mirror the BPM component of EmForge.

Considering that the "Issue tracking" features of the Glister architecture may seem -- in ways -- very much obscured of the novelty of some of the other components of the architecture, perhaps the "Issue tracking" service could not be the main "Selling point", if it would be presented as all of a "Free beer" model.* Regardless, I've estimated that it may be relatively easy to develop an issue-tracking front-end for Activiti -- whether to emulate Bugzilla, Request Tracker, GNATS, or any other normative issue tracking service -- such that would be developed, originally, for issue tracking about individual Ports, such as available on the FreeBSD operating system and such as would be installed to an individual SOHO network.

Though I am not inclined to present it as if it were any manner of a "Zero-Sum Free Beer Return" process -- and well would such a process be a novelty, in itself, of all the spontaneous things -- I suppose that I could try to market it as such, whatever I may eventually be able to develop of the Glister server baseline. Not as if to exploit "Free Beer," it is already a small effort at making use of a small number of existing software components, in developing a manner of an immediately "New" component -- so far, as to develop a "New", and in some ways "Unique", service design, then to proceed towards an effort in producing and maintaining that design in its implementation, in regard to some terms of real software integration, and documentation, and software distribution, and issue tracking. Maybe it would not seem immediately "Fun," in such a perspective.

I notice, presently, that the Jenkins server -- such as I may wish to denote as an alternative to an independent design of a DevOps server, such as Glister proposes to implement, and such as is implemented of EmForge -- has recently found an application in the FreeBSD project. If it is a project trusted by the FreeBSD project, and if it may serve a role in mitigating my own development burden -- and as I may happen to personally trust the OS distributed by the FreeBSD project, in a far way -- thus I've begun to consider applying Jenkins, albeit to an effect of by-and-large abandoning the Glister service design.

Though the Glister service design, in its exact and present composition, has not existed for any long period of time -- this specific design has been in development for all of a couple of days, now -- I'd thought it might serve as a manner of a "Go-getter" project, though, as well as a nice minimalist design for a convenient web service on a local area network (LAN). The note about Gitblit, I'd thought, had seemed to convey so much of the original goodwill ever demonstrated of the Java developer community, at least in the duration up until the Sun Microsystems company -- the original "Shop" in developing Java -- was acquired by Oracle.

Whereas the latter corporate institution, Oracle -- in some manner of a metaphor onto science fiction -- might seem to resemble an archetype much like the character of CLU in the TRON: Legacy universe, and though perhaps I'm the only person seeing it as so, in no ambiguous terms: I miss the goodwill of the original Java developer community. That a programming language such as was originally developed to an effect of a web browser plug-in -- with regard to so much as the Java applet origins of the Java programming language -- has weathered all the tides of Enterprise trends and gone so far as to find an application in an embedded/mechatronic architecture presently continuing an expedition to Mars? Who could have expected such an outcome of a Java applet programming language?

In any linear, even post hoc analysis, how could such a thing have become? And what has been lost of the goodwill of the original Sun Microsystems developer community, in the years since the acquisition by Oracle? Moreover, how much of the original brainpower of Sun, in effect, had "Jumped off the ship" once the Oracle acquisition was finalized? And today, does Oracle still try to discredit the nature of free/open source software engineering, though that may be where any of the staff who left Oracle could be found? Have we not learned anything of this process, as yet?

Towards considering how the Glister service design might scale beyond a context of an individual LAN, it may be -- in that context -- that I might wish to entrust the Jenkins web services not only to present a novel web-facing interface, but also ... but no, it may be simply the novelty of its web-facing interface that would draw my own attention more to Jenkins, as any alternative to the minimalistic design of the Glister service mix.

Candidly, I am a little worried about installing Jenkins on my own SOHO LAN, as -- even with its full free/open source codebase -- I do not know if it is such a kitchen sink as I may actually need to install. Not to discredit its component-oriented design, though I am in any way nonplussed by its marketing. I do not know if a Jenkins instance would actually "Do much" on my LAN, except if it may represent something of a novelty -- and yet, would the Glister design be any less so? If it may be any less of a novelty, and if it may be any more of a producible gain -- to my own manners of personal perspective -- even if it may need "A lot more welding," as to "Put the thing together" -- the "Thing" being a "DIY" component-oriented design of a lightweight DevOps server -- maybe it's not too far past the sunset of Sun.

Personally, I think that a design strategy of "Everything and the kitchen sink" would not be ideal for a design of a light-duty/low-usage software service for an independent network services environment. Thus, personally, I've begun to "shy away from" so many Java Enterprise Kitchen Sink Portal architectures, and the kitchen sink style of DevOps services likewise, in considering any "Forward" designs for network services and -- in that context -- also web services.

I wouldn't want to seem too hasty in abandoning the concept of applying Jenkins, immediately. No sales lost of it, but I would prefer to resume the Glister service design, and to keep my design table "Lightweight."

* The phrase resounds, even of free/open source software component systems: Caveat Emptor

Tuesday, October 13, 2015

Why Open Source Operating Systems: Commercial-Free Developer Support and Technical Documentation

Perhaps one of the greater draws about software development with free/open source operating systems -- such as GNU/Linux, FreeBSD, OpenSolaris, or most of the Google Android and Samsung Tizen platforms -- may be found in a simple estimate of technical developer support, such as might be estimated to be in greater abundance of and about free/open source operating systems. Although -- candidly -- the developer support resources available of free/open source operating systems may not seem to be as heavily marketed as with commercially licensed, closed-source operating systems -- such as of the Microsoft commercial presence behind the Microsoft Developer Network (MSDN), or the Oracle presence now backing the Solaris operating system, Solaris being originally a Sun Microsystems product -- with a certain amount of attention and of simple resourcefulness, it may be possible to locate and to utilize many of the resources as may be available for developer support about free/open source operating systems.

Developer Support in Free/Open Source Operating Systems

Towards developing a manner of a topical overview about developer support resources as may be available about free/open source operating systems, a simple outline:
  • Documentation
    • Tutorial Documentation
    • Reference Documentation
    • Software Distribution Service Data
  • Software Development Support Tools
    • Compiler Toolchains
    • Integrated Development Environments
    • Developer Utilities
    • DevOps Tools
      • Software Configuration and Change Management (SCCM) Tools, as a subset of Software Distribution Support Tools
      • Build Automation Tools
      • Software Distribution Utilities
      • Issue Tracking Systems
      • Whiteboard Tools
  • Developer Forums
    • Mailing Lists
    • Bulletin Boards
    • Social Networking
  •  Source Code
It would be beyond the scope of this simple article to develop any manner of a comprehensive reference manual about these topics. Insofar as the simple process of developing an outline of these topics, there -- in itself -- could be an outline towards beginning to develop a reference manual, as such. The media and format of such a reference manual, however -- secondly, the topical scope of such a reference manual -- may call for some specific consideration.

Reference Documentation - Towards the Core of Software and Systems Literacy

In regards to reference formats and reference media, the author of this article can denote a small number of topics offhand -- variably of media types and media distribution services:
  • Wiki
    • Concept: Web-based topical discussions
    • Contents:
      • Wiki Pages, formatted as HTML
      • Topical Taxonomies, representative of wiki page linking structures
    • Availability: Public Internet, typically
    • Corresponding Concepts
      • Bibliographies
      • Resource References
      • Web-Oriented Peer Review
        • See also: Wikipedia - Articles - 'Talk' Section
  • Project Reference Documentation
    • Concept: Reference documentation developed of single projects
    • Availability: Variable
  • Texinfo
    • Concept: Narrative and Technical Reference Documentation
    • Availability: Typically available via shell command line, 'info' shell command, such as may be available on any single operating system; may be available in alternate media formats (PDF, HTML)
  • Manual Pages
    • Concept: Technical Reference Documentation
    • Availability: Typically via the 'man' and 'apropos' shell commands; may be available in alternate media formats
  • Academic Dissertations
    • Concept: Philosophical Overviews and In-Depth Studies of Technical Topics
    • Availability: Variable
  • Technical Journals
    • Concept: Market Information and Technical Overviews
    • Availability: Journal publishers; libraries
  • Tech Books
    • Concept: Friendly overview literature about technical topics
    • Availability: Booksellers; book services; libraries
  • Tech Encyclopedias
    • Concept: Topical reference surveys about technical topics
    • Availability: Booksellers; book services; libraries
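
As a short illustration of the shell-oriented reference formats denoted above -- manual pages and Texinfo -- a few example commands follow, as may be available on a typical FreeBSD or GNU/Linux installation. The keywords and page names are, of course, arbitrary examples; the availability of any single page will vary by system.

    # search manual page descriptions for a keyword
    apropos virtual

    # display a specific manual page, by section number and name
    man 3 pthread_create

    # browse Texinfo documentation, where the 'info' reader is installed
    info info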


Introducing DITA, Obliquely

Towards applications of a single reference documentation format, it may be possible to apply the Darwin Information Typing Architecture (DITA) in any one or more of those topical categories. DITA is a standard format for technical documentation, standardized in publications from OASIS [DITA 1.2]. In a simple estimate, DITA may seem most often applied for developing documentation about the products of individual commercial enterprises. However, DITA may find application, furthermore, in documentation about free/open source products. DITA may be juxtaposed, functionally, to the DocBook technical documentation format [DocBook.org].
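
By way of a short, concrete illustration -- a minimal sketch only, with an arbitrary file name and topic ID -- the following shell commands would write a small DITA 1.2 concept topic to a file. The element names are standard DITA 1.2 markup:

    # write a minimal DITA 1.2 concept topic to a file
    cat > minimal-concept.dita <<'EOF'
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
    <concept id="minimal-concept">
      <title>A Minimal Concept Topic</title>
      <conbody>
        <p>DITA topics are typed, e.g. concept, task, and reference.</p>
      </conbody>
    </concept>
    EOF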


Web to DITA - XSLT and Semantic Wikis

With regards to a Wiki as a manner of a reference model, it may be difficult to represent all of the depth and meaning of DITA markup within a Wiki markup language. Certainly, some of the DITA schema bears a close resemblance to HTML -- as with the DITA inline markup elements for specifying italic, bold, or underline text, and the DITA structural markup elements for ordered and itemized lists, juxtaposed to the functionally similar markup elements in HTML. Such "HTML-like" elements in DITA might be easily transformed both to and from any conventional, typically HTML-oriented Wiki markup language.

The more semantically specialized DITA elements may not seem meaningful in a Web media model until they have been processed, mechanically -- as in a publication process proceeding from DITA source code to Web presentation -- for instance, with an Extensible Stylesheet Language Transformation (XSLT). Conversely, in order to transform Wiki markup into the more semantically specialized DITA markup elements -- a process proceeding from a Web-based Wiki to DITA source code, in a converse directionality juxtaposed to a DITA-to-Web process -- it may be feasible to begin with a semantically specialized Wiki markup, such as is available with Semantic MediaWiki [Help:Editing - Semantic MediaWiki]. Of course, in order for Wiki editors to become editorially familiar with a semantically specialized Wiki markup language, there may be an additional burden of documentation, if not training, in applications of a semantic wiki markup.

If it proves feasible to develop a "round trip" DITA-to-Wiki publication model, clearly there are some "existing works" that may be adapted to serve, functionally, in a DITA-to-Web and a Wiki-to-DITA publication process -- a DITA-to-Wiki process being functionally subsumed by a DITA-to-Web process, applying XML stylesheets to transform DITA markup into a semantic Wiki markup, together with a procedural system for publishing the generated Wiki markup, juxtaposed to any more media-centric (HTML, PDF, EPUB) DITA-to-Web publication model.
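
In a broad sketch, such transformations might be orchestrated from the shell with the common 'xsltproc' XSLT 1.0 processor. The stylesheet names here -- dita2wiki.xsl and wiki2dita.xsl -- are hypothetical placeholders, standing in for whatever "existing works" may be adapted:

    # DITA source to semantic Wiki markup (dita2wiki.xsl is a hypothetical stylesheet)
    xsltproc --output topic.wiki dita2wiki.xsl minimal-concept.dita

    # conversely, Wiki content -- serialized as XML -- back to DITA source
    # (wiki2dita.xsl is likewise hypothetical)
    xsltproc --output topic.dita wiki2dita.xsl topic-wiki.xml

Of course, as XSLT operates on XML input, the Wiki-to-DITA direction presumes some XML serialization of the Wiki content, beforehand.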

(Ed. Note: The following section of this article's text was originally edited with the Blogger web-based editor, in the Firefox web browser -- in Firefox's distribution on the Android platform. Presently returning to the desktop web browser, the author of this article will endeavor to study the availability of WordPress mobile apps, as perhaps WordPress may be better supported than Blogger's Google-based blogging experience, on Android.)

DITA Markup - Presence in Free/Open Source Software Projects

  • DITA Open Toolkit (see the short sketch following this list)
  •  …
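
As a short sketch of the DITA Open Toolkit in application -- assuming a DITA-OT 2.x installation, with its 'dita' command on the shell's PATH -- the topic sketched earlier might be published to XHTML as follows:

    # publish a DITA topic to XHTML, with the DITA Open Toolkit's 'dita' command
    dita --input=minimal-concept.dita --format=xhtml --output=out/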

The author of this simple blog article will now return to a study of software development tools.

Sunday, October 11, 2015

Installing Debian 8.2 as a VirtualBox Virtual Guest in FreeBSD 10.2-STABLE

Synopsis: In order to run the Mendeley and Evernote desktop applications on my FreeBSD laptop -- short of endeavoring to develop a port for each of those, onto the CentOS 6 (C6) Linux emulation layer in FreeBSD -- I had previously installed Microsoft Windows 7 into a VirtualBox virtual guest machine, from a Microsoft DreamSpark installer disk. In a sense, it has "worked" so far: after completing the OS installation, OS update, and software installation processes onto the virtual guest machine, I now have a Microsoft Windows 7 virtual guest machine available for running Microsoft Windows software via VirtualBox on my FreeBSD laptop. Considering the substantial hardware footprint of Microsoft Windows, however -- in regards to its utilization of system memory and processor resources, whether or not those resources are utilized via a VirtualBox virtual guest machine -- I've estimated that it may be more effective to install Mendeley and Evernote into a Linux virtual guest machine. With those applications installed into a Linux virtual guest machine, and with the guest's operating system (OS) being tunable as a Linux operating system, I estimate that this may be an overall more effective way to utilize the Mendeley and Evernote desktop applications on my FreeBSD laptop -- more effective, juxtaposed to those applications being installed to the, may one say, more indulgently designed operating system that is Microsoft Windows 7.

Thus, I've downloaded the Debian net installer CD -- using the BitTorrent P2P distribution for the download, applying the ctorrent command-line BitTorrent client on my FreeBSD laptop. After some small effort in resolving an initial issue at installation time, I've now installed Debian into a VirtualBox virtual guest machine on my FreeBSD laptop. Of course, this being a learning experience -- and although the style in which I write about this learning experience may not seem like "normal English" to some readers' estimations -- I've considered that it may be useful to record some of my own technical observations from the experience, specifically of installing Debian 8.2 from the amd64 netinst CD.
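
For reference, a minimal sketch of the download procedure -- the torrent file name here follows the Debian 8.2 amd64 netinst naming convention, though the exact name will vary by release; the .torrent file itself would be retrieved from the Debian web site, beforehand:

    # download the Debian netinst ISO via BitTorrent, with ctorrent on FreeBSD
    ctorrent debian-8.2.0-amd64-netinst.iso.torrent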

Firstly, in order to so much as complete the installation, I configured the VirtualBox virtual guest machine to utilize the VirtualBox ICH9 chipset emulation -- juxtaposed to the VirtualBox PIIX3 chipset emulation. When the virtual guest machine was originally configured to use the PIIX3 chipset emulation, the installation would reproducibly "freeze" -- furthermore, always freezing at a specific time, namely when installing the 'passwd' utility during the Debian installer process. Not being immediately predisposed to bug-track that specific issue, I sought, and found, a workaround.

Simply, in the graphical configuration panel for the VirtualBox virtual guest machine for the Debian installation -- specifically, in the virtual guest machine's 'System' configuration panel, 'Motherboard' configuration tab -- I selected the ICH9 chipset emulation instead of the PIIX3 chipset emulation. After making that single change to the virtual guest machine's configuration, I was able to complete the Debian installation. (Ed. Note: Of course, this configuration change could also be made with the 'vboxmanage' shell command, as may be installed with the VirtualBox OSE port on FreeBSD hosts -- a short sketch follows. The VirtualBox manual describes the 'vboxmanage' shell command, at depth.)
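
A minimal sketch of the command-line equivalent follows. The virtual guest machine name, "debian-8", is an arbitrary example, and the change should be applied while the virtual guest machine is powered off. (The command is installed as 'VBoxManage'; a lowercase 'vboxmanage' alias may also be available, depending on the installation.)

    # select the ICH9 chipset emulation for the virtual guest machine
    VBoxManage modifyvm "debian-8" --chipset ich9

    # verify the setting
    VBoxManage showvminfo "debian-8" | grep -i chipset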

In order to create something of a minimalist desktop in the Debian virtual guest machine, I selected the XFCE desktop at installation time. Furthermore, I deselected the print server installation task -- thus limiting the amount of software that the installer would install in the virtual guest machine, before "first boot".

Of course, the Debian virtual guest machine will not be applied, on my FreeBSD laptop, for creating a stand-alone virtual desktop environment of the Debian installation. Rather, the Debian virtual guest machine will be applied to provide some desktop application services, as may then be presented on the FreeBSD desktop via the VirtualBox "Seamless" display integration -- such that it will then be possible to use the Evernote and Mendeley desktop applications, without those applications being installed immediately to the FreeBSD laptop's root filesystem.

At "First boot," with the newly created Debian virtual guest machine, I installed the Debian package virtualbox-guest-dkms. To install the Debian package, I used aptitude package manager application on Debian. The selection is illustrated on an XFCE desktop, in the following screenshot.

[Screenshot: selecting the virtualbox-guest-dkms package in the aptitude package manager, on the XFCE desktop of the Debian virtual guest machine]

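Equivalently, from a root shell within the virtual guest machine -- a short sketch; on Debian 8, the virtualbox-guest-dkms package is distributed in the 'contrib' archive area, such that the corresponding APT source must be enabled. The virtualbox-guest-x11 package, providing the X11 display integration, may also be wanted:

    # refresh the package index, then install the VirtualBox guest modules via DKMS
    aptitude update
    aptitude install virtualbox-guest-dkms virtualbox-guest-x11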
After installing the virtualbox-guest-dkms package, I rebooted the Debian installation within the virtual guest machine. Following the reboot, the Debian installation can utilize the VirtualBox Seamless display mode. Effectively, this allows for a close visual integration of desktop applications running in the Debian virtual guest machine with the FreeBSD desktop -- at which desktop, I've been applying the Cinnamon desktop environment.

Of course, the installation is not as functionally seamless as it may seem visually seamless, at least by the time of "second boot". At "second boot", I can't help but notice that the host machine's mouse pointer is not actually producing input events at the desktop of the virtual guest machine. The host machine's pointer appears to move across the desktop of the virtual guest machine -- at least, as it seems, with the desktop of the virtual guest machine displayed within a VirtualBox window on the host machine -- but the virtual guest machine may not be receiving any input from the host machine's mouse pointer, in the "second boot".

At "Second boot," the desktop of the virtual guest machine  has become unresponsive to the mouse pointer of the virtual host machine -- perhaps it may be something to do with the APIC implementation in the virtual guest machine, as I've not seen any such issue with the Microsoft Windows virtual guest machine I've installed on the same host machine. In retrospect, the "Unresponsive" state of the host machine's mouse pointer may have actually preceded the switch into Seamless mode in the display of the virtual guest machine. Perhaps it may be "Cleared up" with a simple reconfiguration of the respective VirtualBox virtual guest machine.

[Screenshot: VirtualBox Seamless display integration -- applications of the Debian virtual guest machine displayed within the Cinnamon desktop on the FreeBSD host]

After shutting down the virtual guest machine in its minimal "second boot" configuration, I've now reconfigured the virtual guest machine -- within the host operating system, as such -- to apply the original PIIX3 chipset emulation in the virtual guest machine. Subsequently, I've booted to "third boot". In a simple commentary: it may seem that the ICH9 chipset emulation was sufficient at installation time, but that it does not work out as well at normal desktop runtime. With the PIIX3 chipset emulation again selected, at "third boot" of the virtual guest machine, I'm again able to use the host mouse pointer within the virtual guest machine.
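
The reversion, again with the VBoxManage command and with the virtual guest machine powered off -- the machine name being the same arbitrary example as in the earlier sketch:

    # revert the virtual guest machine to the PIIX3 chipset emulation
    VBoxManage modifyvm "debian-8" --chipset piix3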

The screenshot, above, illustrates the VirtualBox seamless desktop integration, with Debian 8.2 running in a VirtualBox virtual guest machine, and the VirtualBox virtualization services running on a FreeBSD host. On the FreeBSD host, I'm applying the Cinnamon desktop environment. In the virtual guest machine -- presently -- I'm applying the LXDE desktop environment. Though I'm considering deactivating the desktop environment entirely -- albeit, then, at a loss of "window switching" behaviors in the virtual guest machine -- personally, I think LXDE is a nice starting point for interacting with the virtual guest machine via a desktop/menu graphical interface.

As illustrated in the previous screenshot, when the VirtualBox virtual guest machine's display window is active on the host desktop environment, and the virtual guest machine's display window is configured for VirtualBox Seamless display mode, then the host machine displays the LXDE desktop environment effectively as a layer on top of the host machine's desktop. Visually, the effect is as if the Debian LXDE installation were running immediately within the Cinnamon desktop on FreeBSD. (Ed. note: Effectively, that is a characteristic of the functional configuration, moreover -- the LXDE desktop running within VirtualBox, the VirtualBox virtualization services providing a manner of a "middle services layer" in running the Debian virtual guest machine, and VirtualBox running within a desktop on a FreeBSD host.)

There are a number of optimizations that may serve to produce an optimally running virtual desktop environment of a Linux installation in VirtualBox -- for instance, adjusting the timer frequency in the Linux kernel configuration to a value more optimal than the default, for running a Linux desktop within a VirtualBox virtual guest machine. Furthermore, it may be advisable to disable the screensaver in the virtual guest machine; a short sketch of the latter follows. In any further detail, such optimizations will be left as an exercise for another article.
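
As a short sketch of the screensaver item -- applying the standard 'xset' X11 utility, from a shell within the virtual guest machine's desktop session; the kernel timer adjustment, being specific to the kernel build or boot configuration, is left aside here:

    # disable X11 screen blanking and DPMS power management in the guest session
    xset s off
    xset -dpms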

By the time of "fourth boot", hopefully my own simple CorvidCube will have the Evernote and Mendeley desktop applications installed. Presently, perhaps this article may serve as an -- albeit wordy -- "howto" towards configuring a sort of meta-development environment on a desktop PC.

Ed. Note: As it turns out, the Evernote desktop application is not available for Linux platforms. Mendeley is, though [Download Mendeley Desktop for Linux]. Bibliography, and so on....