Thursday, December 22, 2005

I just have a quick addition to my previous post "Lisp from the ground up".

I think one of the defining features of a Lisp processor is not so much consing or any specific instruction, but rather the general ability to perform operations on the program code itself. If a strength of Lisp is the ability to operate on the parse tree of the program, then there will need to be instructions that can change the underlying code of a program.
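To make that concrete, here's a minimal sketch in Common Lisp (swap-op is a helper made up for illustration): a program is just a list, so we can inspect and rewrite its tree before running it.

  (defun swap-op (form)
    "If FORM is an addition, rewrite it into a multiplication."
    (if (and (consp form) (eq (first form) '+))
        (cons '* (rest form))
        form))

  (let ((code '(+ 2 3 4)))          ; code stored as ordinary data
    (print (eval code))             ; => 9
    (print (eval (swap-op code))))  ; => 24, same data, rewritten tree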

From my MIPS microprocessor class, I recall that there are two sections of memory for a processor: the code memory and the data memory. The code memory contains instructions (pointed to by an instruction pointer), and these instructions operate on the data and the registers. For example, you can read a certain piece of data, or you can make the instruction pointer register jump to a new address. However, there were no facilities (that I recall) for changing the instruction code itself. I would consider this to be a central ability of a "Lisp processor".
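By contrast, a running Lisp image can already do a software version of exactly this: hand new code to the compiler and splice it in while the program runs. A minimal sketch (greet is a made-up example function):

  (defun greet () (format t "hello~%"))

  (greet)                                   ; prints "hello"

  ;; Build a replacement at runtime, as data fed to the compiler:
  (setf (symbol-function 'greet)
        (compile nil '(lambda () (format t "goodbye~%"))))

  (greet)                                   ; now prints "goodbye"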

Further addition:
To put into perspective how powerful this could be, let's make an analogy to DNA. Right now, any genetic mutations we experience are a result of random, low-probability events. Imagine what might happen if (at will) we could reprogram our DNA. It could be very dangerous---we could wind up changing an individual into a new species, or worse, killing ourselves in the process. Or, we could just change our hair color. Regardless, we would have more power to make changes. In a silly way, I imagine being more like a shapeshifter/changeling from Star Trek. If we do this with processors, we can probably pack a lot more punch per transistor.

Tuesday, December 20, 2005

Lisp from the ground up

I sometimes try to think of how a Lisp system built from the ground up might look. It's an interesting exercise, because it's mostly a matter of scope. Plus, when you compare it to "modern" computer systems, there's a big leap between low-level systems and the high level we interact with. People expect graphical user interfaces, soundcards that work, mice, hard disk interactions, etc.


We could start at the lowest possible level: chip architecture. Can a chip be designed so that it's optimized for Lisp from the get-go? Would it look any different from the chips we have now? I'm talking about things below assembly language here--logic gates arranged to build an arithmetic logic unit (ALU), one of the basic components of a processor. Can we/have we built similar modules that are meant for basic Lisp operations, like consing? We could have new instructions that don't make much sense for someone writing a C compiler, but would be heavily used by a Lisp compiler. C compilers were (presumably) designed around available assembly capabilities, which were in turn designed around machine language features and/or processor instruction sets. But what were processor instruction sets based on? Perhaps basic operations we were familiar with, like adding and subtracting. Perhaps also features requested from higher level languages, like C--something akin to "It'd be great if we could have an instruction that did X". I'd suppose that the majority of instructions nowadays are of the latter type.

But what if Lisp were the primary language, and chip designers heard "It'd be great if we could have something that consed for us"? Would we have different instruction sets? My intuition tells me yes.
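For reference, here's roughly the work such a CONS instruction would have to do, sketched in Lisp itself: allocate a two-pointer cell and fill in its fields.

  (let ((cell (cons 1 nil)))        ; allocate a cell: car = 1, cdr = nil
    (setf (cdr cell) (cons 2 nil))  ; chain a second cell onto the first
    (print (car cell))              ; => 1
    (print (cadr cell))             ; => 2
    cell)                           ; the chain reads back as (1 2)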

Now, the chip would probably still interact with peripheral chips (in hard drives, video cards, etc.) that are customized for their purpose, and which may be programmed in C or assembly; but because all those interactions are (usually) on a low level (bits and flags), there should be no issue there.

To do something like this would probably require starting with FPGAs and some Verilog/VHDL, and result in a custom-designed chip.

Next, we could define the ground as "any existing processor". That's fine and dandy, except there's a large number of processors, and they all do things differently, optimize differently, and have unique instruction sets. The fastest way to do task X on a MIPS instruction set would be different from the fastest way to do it on an x86-based set. So a Lisp compiler, to be truly effective, would need to be customized for each processor--indeed, Intel releases its own C compilers, which happen to produce the most optimized code for an Intel chip. Try it on a different chip, or use a different compiler, and it's not so fast or compact. That sounds like a lot of work to maintain; however, it is doable. Note that Movitz plans to do exactly this, but only for the x86 architecture.

Something I imagine trying for fun would be to make a Lisp system that runs on a smaller chip, like a PIC microcontroller or an ARM-based chip. PIC would be nice for hobbyists and electronics programmers, but ARM would be nice for PDAs. Maybe it'd be best to identify their common instructions and hit them both. I, for one, would love to convert my Zaurus to a Lisp-based OS.

After this, it gets fuzzy for me... I imagine a Lisp kernel, a Lisp OS, with a Lisp compiler and Lisp applications. But somewhere in there, we put in a GUI, making the leap to user-friendliness. How to do that while still maintaining Lisp-ishness is something I'm still figuring out. Maybe something akin to the way GNU/Linux distributions have a GUI yet still let you access the terminal (which I would equate to the Lisp toplevel). We'd also have many standard protocols in there, like USB and TCP/IP, communicating just as effectively as they do now with their C implementations.

I'm also still thinking about the namespace and toplevel issues. As I'm still learning Lisp, there may already be a solution I don't know about, but there would need to be some way to keep everything from being exposed at the toplevel. Maybe something akin to a directory structure (or nested namespaces), much like how in the terminal you have to be in a particular directory to run a program. Without this, function and variable names would become unnecessarily long.
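For what it's worth, Common Lisp already ships with a package system that covers some of this ground. Packages are flat rather than nested, but qualified names read a lot like paths. A rough sketch (the package and function names here are made up):

  (defpackage :my-utils
    (:use :common-lisp)
    (:export :double-it))

  (in-package :my-utils)

  (defun double-it (x) (* 2 x))   ; exported, visible from outside
  (defun helper () 42)            ; internal to the package

  (in-package :common-lisp-user)

  (print (my-utils:double-it 3))  ; => 6, qualified like a path
  ;; my-utils:helper would signal an error: HELPER isn't exported.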

More thinking to go; feel free to add comments if you have thoughts of your own...

Thursday, December 15, 2005

I've decided to give the Dvorak keyboard layout a try. This is my first post fully typed with Dvorak, so it probably won't be a long one. :) Mostly, I got the first round of practice with http://gigliwood.com/abcd/abcd.html

Also, I can't wait for a decent e-book reader to come out, using e-ink technology. I'd make one myself, but I don't have the finances for it. Donation, anyone?

Monday, December 12, 2005

Web 3.0 beta

In my last post, I mentioned how noticing patterns can give you an edge and tell you a thing or two about the future. I'll make use of that today in predicting Web 3.0. But this isn't just any Web 3.0, this is Web 3.0 beta.

I've seen a few other people already referring to Web 3.0, but they're missing the "beta" part. Web 2.0 will come and go, and people will sigh as they say "still no flying cars". There was much hope and promise for Web 2.0, but it never materialized. Why?

And thus, someone from O'Reilly or Google or even a lowly unknown blogger will utter "Web 3.0 beta," and everyone will see the error of their ways. Of course Web 2.0 didn't work! There was never any beta testing. Google beta tests for so long that we forget it's beta; they are easily forgiven for their mistakes, and they have longer to work out the kinks in the system. Alternatively, the name could signify that Google IS the internet, and they can brand it however they want. And since they know it's not a completed internet (the world's information isn't fully organized yet), it will be released as Web 3.0 beta, and it will stay there indefinitely.

What can we look forward to with Web 3.0 beta?

In addition to Google Talk, there'll be Google Listen, which will listen to whatever you have to say and respond (verbally) insightfully, throwing in barely perceptible sales pitches tailored to your discussion.

Not only will there be Froogle, but there will be GoogleBucks, which will replace US and other currencies as we know them. It will bring forth a world-wide currency system, so your hard-earned dollar here buys the same cup of coffee in Paris as you'd get in Beijing.

There will be newer and better ways of writing software. For the first time, you'll have a web-centric programming language with the power of Lisp, a fully functional visual & verbal development environment, and a thorough set of libraries that are intuitive and easy to use; it will be fast (to write and to run), and it will be a natural fit for both web-based and single-computer programming.

Ladies and gentlemen, these are just a few of the features we can look forward to in Web 3.0 beta. Whether that's 5 years away or 100 years away, who knows? But that's the beauty of beta...

Wednesday, December 07, 2005

Probability

I recently read the free book "God's Debris" by Scott Adams, and I must admit, it presented some interesting viewpoints. As Scott himself admits, there are holes in it, but some of the basic axioms presented hold some truth.

One notion I found intriguing was that probability rules all: if you flip a coin long enough, it WILL land heads up about 50% of the time. No one can change that, no one can beat that; it is immutable.
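As a quick sanity check, you can watch that immutability show up in a few lines of Lisp (heads-fraction is a made-up helper):

  (defun heads-fraction (n)
    "Flip N fair coins; return the fraction that land heads."
    (/ (loop repeat n count (zerop (random 2)))
       (float n)))

  ;; (heads-fraction 10)      => jumpy: maybe 0.3, maybe 0.7
  ;; (heads-fraction 1000000) => something like 0.500 and change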

Suppose someone wanted to build a time machine, not so much for traveling through time, but rather to simply see into the future. Could it be done?

I think the more important question is: would it need to be done? Since we don't have the tools to predict the future, probability is our closest friend. We know that certain events occur (on average) with certain regularity. For example, 99.9% of the time that I tie my left shoelace, I'll also tie my right one. Sometimes I won't. If someone were to bet on my shoelaces, there is a risk that I could tie only the left one.

Is it an acceptable risk to say that I will tie my right shoelace if I tie my left? I'd say yes. You won't be wrong often.

If we notice patterns, and we identify probabilities for future events, we can essentially see into the future. Not a specific future, but an averaged future.

Pattern recognition is probably one of the most valuable skills someone can have. If the right elements are being observed, and a pattern can be conjured up, the probabilities associated with that pattern can provide an edge that no one else possesses.

We can see this all around us. Patent lawyers make a living off submarine patents (as in, patent something, wait for someone to implement/invent it, then force them to license your patent if they want to take it to market: check out the Blackberry or Eolas fiasco for examples). Casinos thrive off of the edge they create--even though their customers are well aware of the tilted scales. They may not know the outcome of each poker hand or pull of a slot machine, but they know that overall, they're going to win. Venture capitalists, too, know that the "next big thing" has to come from somewhere, and that losing on 10 bets and winning on 1 is part of the game. But when that 1 wins big, it takes care of the 10 losses.
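To put rough numbers on that venture math (the dollar figures below are invented purely for illustration):

  ;; Ten flops at -$1M each, one hit at +$50M:
  (let* ((flops (* 10 -1))      ; ten losses, in millions
         (hit   50)             ; one big win, in millions
         (net   (+ flops hit))) ; => 40
    (format t "Net across 11 bets: $~DM, or $~,1FM per bet~%"
            net (/ net 11.0)))
  ;; prints: Net across 11 bets: $40M, or $3.6M per bet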

Losses happen. We can't change that. All we can do is find an edge and ride it long enough for probability to play out. That means (1) you must be able to identify patterns/edges and (2) they must exist in a long enough time frame.

So do we need a time machine that shows the future? I'd say no. All we need is a machine that can make good enough observations of the past, find good enough patterns, and produce probabilities for future events. Factor in learning to identify changing trends, and you're close enough to a time machine--who needs the real thing?

Monday, December 05, 2005

I got back from my last company visit last week, and boy oh boy, it's cold in Michigan. I seem to have brought back a sore throat with me, but I can also boast a 3-for-3 success rate: every time I've had a site visit with a company, they've ended up wanting to hire me.

In other news, I was reading the Dilbert blog tonight, and there was a post on how to tell if a movie is good (dilbertblog.typepad.com). Scott Adams uses the "rave reviews" source as a determining factor, but I've found my own way to tell if a movie is bad. Toothy-smile count.

If you look at the box cover, the number of people with big smiles is directly proportional to how bad the movie is. I think it has something to do with the basis of the movie being one of a "feel good" nature. "Little Women" from December 1994 is a perfect example of this: there are 5 beaming faces on the front cover. If we look instead at the movie "First Knight" or "Pirates of the Caribbean: The Curse of the Black Pearl", how many bright smiles do you see looking at you? Precisely zero.

I'm sure that exceptions abound, and that I haven't done enough research to broadly justify my claims, but given the five minutes I spent looking into it, I believe it warrants consideration as anecdotal evidence.

And that's good enough for me. :|

(By the way, you can tell this was a good blog post, because the "smiley" at the end wasn't smiling.)