30 October 2010

An empty machine beckons ...

Omne ignotum pro magnifico ("everything unknown passes for magnificent"), in the present as in the past

So here's an empty laptop once again. Every time I've begun a port, I've always faced a completely empty machine and sought to fill it.

With 386BSD, it started on a "lunch pail" 16 MHz 32-bit first generation 386 with 2MB of memory and a 40MB 5.25" hard disk with a 1.2MB floppy - where 386BSD got its name. This time the empty machine is a bit bigger - claiming 3,500 MHz of 64-bit 8th generation with 512MB of memory and a 250,000MB 2.5" hard disk with a 4,700MB optical drive.

I'd like to say that since 386BSD, OS+distribution has gotten 200x better. That gets to the first question I'm seeking to answer - what is necessary for an OS+distribution to improve beyond current limits?

The way we answered this before was to supply the PC user base with a derivative of VAX/BSD that could scale with PC developers (we knew hardware would develop rapidly, and that software would lag behind it). The part that was "old" BSD was, to us, just a starting point; the part that mattered most was where one could go with it. Modular extensibility to compartmentalize software development, and adaptive administration to lessen the burden of a more complex personal OS environment, turned out to be the limits we were then exploring.

To begin, one needs to understand the state of the art to see where the limits are. So, by putting other 64-bit OS's onto the same hardware, we get an understanding of the limits of usability a user runs into when attempting to use them. In this way we can compare many and get an idea of where they go awry in common - the bigger picture of what an OS needs to do to exceed the current limits.

And with this we start with the installation of a system adjacent to our OS in development, on the same hardware and under the same conditions IDENTICALLY, down to erasing ("zeroing") the drive before installing.
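
For concreteness, a minimal sketch of that kind of "zeroing", assuming the target drive shows up as /dev/sda and that clearing the partition metadata (rather than the whole surface) is enough; both are assumptions:

    #!/bin/sh
    # Clear leftover metadata from the disk's previous contents so no stale
    # partition table or filesystem header can confuse an installer.
    # The device name is an assumption - verify it before running, since this
    # destroys everything on that disk.
    DISK=/dev/sda

    # Zero the start of the disk: MBR/GPT, boot loader, old filesystem headers.
    dd if=/dev/zero of="$DISK" bs=1M count=16

    # GPT keeps a backup table at the end of the disk, so zero the tail too.
    SECTORS=$(blockdev --getsz "$DISK")              # size in 512-byte sectors
    dd if=/dev/zero of="$DISK" bs=512 seek=$((SECTORS - 32768)) count=32768

    # (A full wipe is simply: dd if=/dev/zero of="$DISK" bs=1M - slower but thorough.)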

When we tested 386BSD installation, we found how incredibly small things made installation fragile. Back then, the definition of the "PC platform" was not very refined - you couldn't even depend on memory layout or BIOS compatibility. There were hardware devices that would jam the bus on reference, and spontaneous resets that occurred for unknown reasons (later found to be chip bugs in certain steppings).

So in starting to install OS's, it wasn't surprising to find that it didn't work right the first (dozen) times, and that it required finding "magic combinations" of hardware configuration to complete. In one case it required disabling certain BIOS and OS features involving power (ACPI and the processor fan), as well as disabling network updates of software during installation. Even with these, other surprises were present.

Partitioning software embedded in the install process "core dumped", jamming the installer (later traced to the way the drive had been cleaned - another OS had left behind a partition table that provoked the error). During the analysis of this, it was found that there were unrecoverable situations with hard disk formatting that required wiping the disk to recover, and even then certain ways of partitioning the drive would create peculiar errors. At one point, a partially wiped drive core dumped the installer when it mounted a severely damaged filesystem and attempted to use it! In another case a formatting operation was rerun: a script had not been waited for properly, was delayed by the media, and so was assumed to have failed and was reissued on timeout.

Then as now, the semantics of the installation environment are the culprit. One obvious area of improvement is to save, across the network, all installation attempts - the differences in environment encountered and how the installation process coped with them. With that knowledge, one could refine a decision tree of scripts on a network-accessible site to try automatically when novel issues are encountered.
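
As a rough illustration of the first half of that - capturing and shipping each attempt - a sketch whose host name and paths are invented:

    #!/bin/sh
    # Hypothetical install-time hook: snapshot what the installer saw and how
    # far it got, then push the record to a collection site on the network so
    # failed attempts can be compared later.
    REPORT=/tmp/install-report.$$
    COLLECT_URL=http://reports.example.org/install       # assumed collection site

    {
        echo "=== attempt $(date -u +%Y-%m-%dT%H:%M:%SZ) ==="
        uname -a                          # kernel and architecture booted
        cat /proc/cpuinfo /proc/meminfo 2>/dev/null
        lspci 2>/dev/null                 # devices the installer had to cope with
        fdisk -l 2>/dev/null              # disk layout as the installer left it
        dmesg | tail -n 200               # recent kernel messages, including errors
    } > "$REPORT" 2>&1

    # Best effort only - a failed upload should never break the install itself.
    curl -s -T "$REPORT" "$COLLECT_URL/" || true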

So it begins ...


Posted by william at 17:55
Comments
Kernel improvements - where to do them good and bad

One note is that in Linux, one can assign processes to groups for the completely fair scheduler, allowing a crapload of compilations to be given fair CPU time against something consumptive, like a video playing, instead of giving each build equal priority with the video.

http://www.webupd8.org/2010/11/alternative-to-200-lines-kernel-patch.html

Posted by: Ben at November 28, 2010 18:46
Re: Kernel improvements - where to do them good and bad

In other words, Linux is at the point where changes to the kernel are becoming suboptimal - more can be done by changes external to the kernel, such that the kernel can be reduced in size/complexity. Rather than add 200 lines of kernel code in this case, preconfiguring system operation in a user-level shell script suffices.
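
To make that concrete, a sketch only (cgroup-v1 layout; the mount point and group name are assumptions) of what such a user-level script might look like:

    #!/bin/sh
    # Run a command (default: make) inside its own CPU scheduling group, so the
    # completely fair scheduler weighs the whole build as one peer of, say, a
    # video player, instead of weighing each compiler process against it.
    # Assumes a cgroup-v1 "cpu" controller mounted at the path below.
    CG=/sys/fs/cgroup/cpu/build

    mkdir -p "$CG" || exit 1

    [ $# -gt 0 ] || set -- make
    echo $$ > "$CG/tasks"       # move this shell into the group
    exec "$@"                   # the build and all its children inherit it

Run as, say, "sh buildgroup.sh make -j8" (the script name is made up); every process the build spawns then lands in the one group.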

So perhaps preprocessing on boot-up for architectural artifacts of a kernel might allow less code to do more. An /sbin/init kernel.rc that builds the architectural layers of the kernel and the data structure relationships?
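
Purely as one hypothetical reading of that idea (the file name, the groups, and the tunables are all assumptions, not an existing interface), such a kernel.rc might look like:

    #!/bin/sh
    # Hypothetical kernel.rc, run once by init at boot: build scheduling
    # policy from user level instead of carrying it as kernel code.

    # Mount the scheduler's grouping interface and lay out standing groups.
    mkdir -p /sys/fs/cgroup/cpu
    mountpoint -q /sys/fs/cgroup/cpu || \
        mount -t cgroup -o cpu cpu /sys/fs/cgroup/cpu
    for g in interactive batch background; do
        mkdir -p /sys/fs/cgroup/cpu/$g
    done

    # Bias the groups once, here, rather than per process inside the kernel.
    echo 2048 > /sys/fs/cgroup/cpu/interactive/cpu.shares
    echo 128  > /sys/fs/cgroup/cpu/background/cpu.shares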

Posted by: William Jolitz at November 28, 2010 19:05
Getting up and running - bumps and bruises

In getting an OS to work on the empty machine, I've encountered more than a few surprises:

a) It took me a few days and about five tries to install, screwing up partitioning of the drive using GRUB - it would seem that installing on an empty machine (no OS) isn't tested other than in the case of claiming the entire disk (see the sketch after this list). The semantics of the process didn't match the operations of the programs, nor was it clear what the net effect would be until after the installation attempt ended. Nor was there a way to assess why the installation failed or how to recover - one had to scratch everything and try again. Nor was the resource utilization of the choices, set in stone at the beginning, apparent.

b) presumption of operation by inference from the choice of action seems to be the entire way a new user "learns" to set up the system. Thus if you know it you don't have to learn it, but if you don't, there's a long chase around the barn.

c) adding software sometimes resulted in application aborts leaving inconsistent dregs - mostly-installed programs missing menu entries in one case (Thunderbird - easily fixed).

d) the default browser did not function as expected, necessitating adding a different browser. A fail for the initial user experience when attempting to confirm the installation.

e) it was hard to tell when the driver and configuration for the wireless were correct; having to remember to initiate with the wireless button and to disable conflicting adapters confused even the case where it had been done correctly, obscuring the fact that it had been working for weeks.

f) using web information to solve audio problems was inconclusive as well - it was hard to tell what made the fix work for the prior person; once they got it running they didn't want to document the steps further. So everything has an inconclusive feel to it, and these items can't be incorporated into a regression test of some kind. Thus the software doesn't get better.
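
On (a), a hedged sketch of doing the partitioning by hand before the installer runs, so its semantics are at least visible; the device name and sizes are assumptions for a ~250GB drive:

    #!/bin/sh
    # Partition the (already zeroed) empty disk explicitly, rather than having
    # the layout inferred from the installer's menus.
    DISK=/dev/sda

    parted -s "$DISK" mklabel msdos                             # MBR label for GRUB
    parted -s "$DISK" mkpart primary ext2 1MiB 512MiB           # /boot
    parted -s "$DISK" mkpart primary ext3 512MiB 200GiB         # /
    parted -s "$DISK" mkpart primary linux-swap 200GiB 204GiB   # swap
    parted -s "$DISK" set 1 boot on

    parted -s "$DISK" print     # record the result before the installer touches it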

Posted by: William Jolitz at November 28, 2010 19:35