A brief history of recent decisions, and their consequences:
But first, for added context, here is a recap of some of what I'm after:
I've tried to blur the line between "the code of the system" (the interpreter) and code running within the system (interpreted code). Both are made up of functions embedded within the same object-model, and everything operates by inspecting & modifying objects within that same model. Functions are either "native" (pure JavaScript) or are composed of (AST) objects that are interpreted by the system, but either kind should be callable from anywhere. Only the interpreter NEEDS to be "native" (otherwise the system cannot run). However, I will make an objects-to-native compiler so that EVERYTHING (except a few small fundamental operations) can be coded in the language of the system, thus making it self-(re)defining.
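To make that concrete, here is a minimal sketch of both kinds of functions living in the same object-model. The "scope", "body", and "syntax" property names are the real ones discussed below; the "args" name, the array-based AST encoding, and the callback shape are placeholders of my own:

```javascript
// An object-function: its body is made of (AST) objects that the
// system interprets. Encoding is illustrative: (* n 2).
const double = {
  args: ["n"],
  scope: null,                          // lexical scope, set at definition
  body: ["*", ["symbol", "n"], 2]
};

// A "native" function: its body is plain JavaScript, but it lives in
// the same object-model and is callable from anywhere, just like double.
const log = {
  args: ["value"],
  scope: null,
  body: (expr, context, callback) => {  // see (1) and (6) for this shape
    console.log(context.value);
    callback(context.value);
  }
};
```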
The system is implemented in Continuation Passing Style (CPS). There are two main reasons: it makes it far easier to serialize (and save/restore) the state of a running system at any point, and it lets operations that affect control flow (e.g. "return") be implemented as regular functions without any special support from the system.
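For anyone unfamiliar with the style, here is the shape of the difference (plain JS, nothing system-specific):

```javascript
// Direct style: the JS call stack silently holds "the rest of the work".
function addDirect(a, b) { return a + b; }

// CPS: the rest of the work is an explicit callback argument, so it can
// be captured, stored, and resumed later -- which is what makes pausing
// and serializing a running system possible, and makes "return" just
// another function call.
function addCPS(a, b, callback) { callback(a + b); }

addCPS(1, 2, (sum) => addCPS(sum, 3, (total) => console.log(total))); // 6
```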
1. A mismatch arose between how functions pass & return values. The "eval" function takes the code to be evaluated and the context (lexical scope) in which to evaluate it (and similarly for helper functions, e.g. "lookup", which fetches a variable by name from the context). But since non-native code does not have direct access to the code-being-eval'ed and the context-object, I've changed things so that native (JavaScript) functions must always take the code-to-eval and the context-object as arguments. The "arguments" that are "passed" to the function (from the perspective of interpreted code) are actually added as properties of the context-object (nothing new here), so native functions access them directly as properties of the context-object, while still having direct access to what is being eval'ed and the context in which it is happening (e.g. so that new functionality can be added or modified).
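In concrete terms (a sketch; the argument names are placeholders, and see (6) for the open callback question):

```javascript
// Every wrapped native takes the expression being eval'ed and the
// context-object; the caller's "arguments" have already been added
// to the context as properties by the eval'er.
function add(expr, context, callback) {
  const sum = context.a + context.b;  // ("add", 1, 2) ~> context.a, context.b
  // Because the native also receives `expr` and `context` themselves,
  // it can inspect or extend what is being eval'ed, e.g.:
  context.lastResult = sum;           // hypothetical: stash a value in scope
  callback(sum);
}
```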
2. The change from (1) would make it impossible to call arbitrary native functions from outside the system. The fix is to qualify compatible functions by wrapping them like object-functions. Object-functions are objects with properties specifying the arguments, whether it's a "syntax" (macro) function, the lexical scope ("scope"), and the "body" of the function. If the body is a native function, it is invoked as stated in (1). If a native function is called directly, the arguments are passed directly (rather than being nested inside a context-object), and it is invoked in direct style (rather than CPS -- see (6)).
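A sketch of the wrapper alongside an unwrapped native (again, only "syntax", "scope", and "body" are the confirmed property names):

```javascript
// System-compatible: wrapped like an object-function, body in CPS.
const wrappedAdd = {
  args: ["a", "b"],                   // used to populate the context-object
  syntax: false,                      // evaluate arguments first (see (5))
  scope: null,                        // lexical scope object (see (3))
  body: (expr, context, callback) => callback(context.a + context.b)
};

// Called directly from outside the system: plain arguments, direct style.
const rawAdd = (a, b) => a + b;
rawAdd(1, 2); // => 3
```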
3. Object-functions contain their own lexical "scope" object, effectively acting as closures with the enclosed scope directly accessible as the "scope" property on the function-object. Since the scope of a native JavaScript function is inaccessible from within JavaScript (and because things "break" if you try to modify a function), I'm implementing the "native" operations as wrapped objects (see (2) above) that reference neighboring functions via this "scope" property (which requires the system's setup to carefully set that property to the object that contains everything). This allows even the native functionality to be modified directly without breaking contexts. (Pure native functions are implemented so that they don't rely on any enclosing scope.)
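Here's why that matters, sketched (the invocation convention -- body called as a method of its wrapper, so `this` is the function-object -- is an assumption of mine):

```javascript
const system = {};   // the object that contains everything

system.add = {
  args: ["a", "b"], scope: system,
  body: (expr, context, callback) => callback(context.a + context.b)
};

system.double = {
  args: ["n"],
  scope: system,     // carefully pointed at the all-containing object
  body: function (expr, context, callback) {
    // Reach the neighboring "add" through the scope property instead of
    // a JS closure, so replacing system.add later takes effect here too.
    this.scope.add.body(expr, { a: context.n, b: context.n }, callback);
  }
};

system.double.body.call(system.double, null, { n: 21 }, console.log); // 42
```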
4. I chose not to have a "symbol" type, and instead use strings to look up variables. I did this because I did NOT want to wrap each value in an object with separate semantics marking its type; I'd rather rely on the underlying JS type-system, and JavaScript has no underlying type that matches a symbol. The "uh-oh!" of this is that there is now no way to distinguish between looking up a variable and using a string value. For example, ("foo", "x") might mean to pass the value of variable x into function foo, or it might mean to pass the STRING "x" into foo. My plan for now is to use an object-wrapper to distinguish the two. I'm leaning toward wrapping symbols, so that strings can keep their simple implementation. Thus: ("foo", ("symbol", "x")). There is more to explore here. Also, forget how "ugly" this might look; remember that the goal is to map everything through a UI with the most ideal visual representation, which might not always be a 1:1 rendering of every property as it is literally laid out.
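A sketch of the distinction the wrapper buys (the array encoding of ("symbol", "x") is illustrative, and this is direct-style for brevity):

```javascript
// Strings evaluate to themselves; only an explicit ("symbol", name)
// wrapper triggers a variable lookup in the context.
function evalArg(arg, context) {
  if (Array.isArray(arg) && arg[0] === "symbol") {
    return context[arg[1]];          // a variable reference
  }
  return arg;                        // a plain value, including strings
}

const context = { x: 42 };
evalArg("x", context);               // => "x"  (the STRING "x")
evalArg(["symbol", "x"], context);   // => 42   (the value of variable x)
```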
5. The presence of a "syntax" property on a (non-pure-native) function tells the eval'er to pass the arguments as-is instead of evaluating them. Thus (foo (+ 1 2) (* 3 4)) will pass the actual (+) and (*) expressions into foo if foo is a "syntax" function; otherwise it will pass 3 and 12. The same applies to object-wrapped native functions (maybe I need a better name for these). These are LIKE "macro" functions in LISP languages, except that they can be called at any time rather than being expanded once and then removed. My reason for doing this is that the "macro" idea assumes a compilation phase, whereas my system has NONE -- there is no "source" code (outside of bootstrapping the system); everything just IS. Perhaps later I will find something useful to do with the VALUE (rather than just the presence) of the "syntax" property.
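Sketched as a toy direct-style eval'er (the real one is CPS, and the encoding and lookup rules here are simplified placeholders):

```javascript
function evalExpr(expr, scope) {
  if (!Array.isArray(expr)) return expr;          // literals
  const fn = scope[expr[0]];                      // simplified lookup
  const raw = expr.slice(1);
  const context = Object.create(fn.scope);
  fn.args.forEach((name, i) => {
    context[name] = fn.syntax
      ? raw[i]                      // "syntax": pass ["+", 1, 2] as-is
      : evalExpr(raw[i], scope);    // otherwise: pass 3
  });
  return fn.body(expr, context);
}

const scope = {};
scope["+"] = { args: ["a", "b"], scope, body: (e, c) => c.a + c.b };
scope["argCode"] = {
  args: ["x"], syntax: true, scope,
  body: (e, c) => c.x               // receives the unevaluated expression
};
evalExpr(["+", 1, 2], scope);                 // => 3
evalExpr(["argCode", ["+", 1, 2]], scope);    // => ["+", 1, 2]
```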
6. The choice to implement the system in CPS (see top) means that native functions must also be (manually) written in CPS, and thus take a "callback" function as an argument (non-wrapped native functions are still called in direct style -- see (2)). I have yet to decide how to expose this callback to native functions, or whether I should implement call-cc. I might also consider augmenting the callback argument to include separate "success" and "error" callbacks, or perhaps even arbitrarily-labeled callbacks (e.g. for exception-handling).
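The options I'm weighing, sketched (every name here is hypothetical):

```javascript
// Option A: a single continuation, as assumed in the sketches above.
function half(expr, context, callback) {
  callback(context.n / 2);
}

// Option B: arbitrarily-labeled continuations, so "error" (or any other
// named exit, e.g. for exception handling) is just another callback.
function safeHalf(expr, context, k) {
  if (typeof context.n !== "number") {
    k.error(new Error("n must be a number"));
  } else {
    k.success(context.n / 2);
  }
}

safeHalf(null, { n: 10 }, {
  success: (v) => console.log("ok:", v),      // ok: 5
  error:   (e) => console.log("err:", e.message)
});
```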