I want to make a few observations about this, and why I found it seemingly less "hacky" than other such attempts:
- As I said in an earlier comment, "var" is just a typedef'd "void *". The downside is that libCello code is essentially untyped, but the upsides are that the C preprocessor is now enough to do the processing needed for the rest of the macros in the language, and that you can still mix regular typed C with libCello code for a "best of both worlds" mix.
- Looks pretty, right? What you're responding to is not just the nice non-capitalized macros and the $ keyword, but the syntax highlighting in his examples. Fire up a text editor and write you some libCello code without this highlighting and it probably won't feel as nice.
I'm extremely interested in the idea of taking the syntax highlighting, formatting, and code completion OUT of these specialized IDEs and plugins and into some kind of standard "bidirectional channel" between a language processor or compiler, its macro system, and the user's editor of choice.
We should be able to make entire DSLs and specialized syntaxes that not only compile, but are able to provide rich information to the surrounding development environment in a standardized way. I'm not alone on this. F#'s Type Providers do exactly that. But imagine being able to control not only the "intellisense", but also the syntax highlighting, argument/return tooltips, documentation, preferred formatting, snippets, etc.
And by "surrounding development environment" I mean everything from the command line to vim and emacs all the way to Sublime Text, Eclipse, and Visual Studio. Even github! Why do you have to register a new syntax highlighter on github for a language and hope they turn it on?
Thanks! I knew I had read something that was along these lines and that was it. I've been thinking about this problem for years. Perhaps it's time I worked on it myself.
Turning up blanks here trying to find any kind of project site for it, looks to be internal for now.
I've long wanted to see something like an Eclipse daemon offering tool support to external editors etc. Ensime [1] does something along these lines, getting Scala's compiler functionality into Emacs using the Swank protocol from Slime [2].
> I'm extremely interested in the idea of taking the syntax highlighting, formatting, and code completion OUT of these specialized IDEs and plugins and into some kind of standard "bidirectional channel" between a language processor or compiler, its macro system, and the user's editor of choice.
Well maybe if we stopped using ASCII files as our source code and used a richer type of document we could start to make some improvements!
What kind of rich document? Well, what about HTML? Did you know you can even embed interactive elements in such documents? What if we started embedding our source code and editors in our documents instead of the other way around? The whole model is inverted, but mainly for historical reasons, because you know, in 1979 all we had was VT100 to interface with our machines... it's like we never moved on... why is my "code" a splattering of ASCII files in a directory? A package definition, tests, dotfiles for this and for that... ship the editor, the tests, the docs, the examples, and the source ALL as one document!
That may have been the case in 1979, but in 1980, we got Smalltalk. This environment was (almost) completely without the concept of a file. Code was stored in a persistent 'image', like everything else, and was accessible through normal object based data structures. This, and the fact that all of the source code for the system was immediately available, made it extremely easy to build and extend language level tools like browsers and debuggers.
I would say the downside of this 'rich' environment is that your tools tend to become language dependent. Early Smalltalk code versioning tools were fairly easy to build (I'm guessing), but would definitely be Smalltalk centred. The unfortunate advantage of an ASCII text file is that it is very difficult to have more than one interpretation of its non-existent meta-data.
Thank you for mentioning Smalltalk! My comment was heavily inspired by that "computing environment".
If you look at JavaScript and the web browser... well, it's more like a Smalltalk system than a UNIX system! There are no files! It has a weird little dynamic multi-paradigm language! The web browser became the spiritual offspring of Smalltalk as soon as Brendan tacked on JS. :)
Look how WEIRD package management gets for JS when you try and make it supported by UNIX and the filesystem!
As for tools becoming language dependent... what about sourceMaps? Look at how CoffeeScript integrates with our workflow tools these days!
The biggest problem with Smalltalk was that it was made for Personal Computing, and ALWAYS had issues running other people's code or integrating well with collaborators.
The web browser and JavaScript solve a lot of these issues with the whole sandboxed code environment...
The web has NOTHING to do with files, so it seems like the perfect place to just forget about them and move forward...
Relays and switches only got us so far...
Machine code only got us so far...
UNIX only got us so far...
Why have we lost the courage to continue looking for better solutions and being brave enough to throw away certain concepts?
I mean, look at mobile and touch computing! iOS and Android are completely abandoning the concept of files!
Why? Because they don't really have to be there... Again, look at Smalltalk... and I know, it "failed", but do you really think it failed because it didn't have files? Is that the one reason? Is it even a reason at all??
The web has NOTHING to do with files, so it seems like the perfect place to just forget about them and move forward...
What? I really don't understand this comment. The web is almost totally file-based. Visiting any website is just downloading and executing files. Including Javascript and CSS files is really just like a copy-paste operation--it just spews everything into the global namespace, which is that of the HTML file that you're currently viewing.
They're representations of resources, not files. Some of them are stored as files on the server, but that's purely incidental and irrelevant to the browser; it treats them just the same as it treats the runtime generated representations, such as this HTML we're viewing.
Files are representations of resources as well. Some of these are mapped to data stored on disk blocks, but others aren't. Some file systems are not disk abstractions at all. While I agree that calling HTTP resources "files" is not entirely correct, they're presented in a very file system-like hierarchy, and in most cases I'd say that they are not meaningfully distinct from files in a file system.
I definitely find "The web has NOTHING to do with files" less agreeable than what kyllo is saying.
> You can move them around, read from them, write to them, execute them...
That functionality depends entirely on the file system and the file in question, as much as the operations available on an HTTP resource depends on the verbs that the resource implements and how it implements them.
Also, a file system will typically not execute your files. Nothing stops you from executing an HTTP resource in the same manner. Say,
This HTML we're viewing may be runtime-generated on the server, but it's still saved on my machine as one or more files in a temporary folder and then opened by my browser application, is it not? A web browser is really just a fancy, scriptable file viewer.
And that's for HTTP--not to mention the other major protocol FTP, the name of which is self-explanatory: File Transfer Protocol.
Define "file". Read off from the definition of file whether the WWW is just a set of "files".
If you use the UNIX-like definition of file (to say nothing of the Plan 9 definition), which can include dynamically-generated streams such as /dev/random, then sure, the whole web is a set of files. I assume Plan 9 will even serve web sites out of its "filesystem" with the correct poking.
If you use the DOS-like definition of a static file (ignoring CON and a couple of other special cases), then, no, the web is not merely a set of fancy static files on disk. Many, many, many things never have an at-rest representation on disk as a simple file, even if one could theoretically be manifested at a point in time for some particular web page.
I sort of feel you're trying to conflate these two distinct definitions, and using the fact that the web is a set of stream-files to assert that the web is nothing but a set of disk-files, but that's equivocation.
This HTML we're viewing may be runtime-generated on the server, but it's still saved on my machine as one or more files in a temporary folder and then opened by my browser application, is it not?
No, it's kept as an in-memory structure. Certain representations are serialized and cached to files, yes, but that's just an optimization to reduce network requests; you can disable it without affecting the browser.
In particular, "grep -r kyllo ~/.cache/mozilla/firefox/" returns zero results, despite having this post open on my browser.
A web browser is really just a fancy, scriptable file viewer.
I disagree; it's a hypertext engine, with renderers for multiple media types. That hypertext can be loaded and/or cached from files, but that's an implementation detail that does not define the application.
One of the reasons Smalltalk failed is called Java.
Smalltalk was actually getting a foothold in the enterprise when Java happened and everyone switched to it.
Eclipse was originally a Smalltalk environment (Visual Age for Smalltalk).
Yes, files also played a big role, because it was not possible to use source control systems with Smalltalk that worked across implementations. So you were bound to a specific vendor.
And actually Sun approached Viacom to license Smalltalk at much less than what they were currently charging. Viacom wouldn't budge so Sun invested in "Oak" that they were already playing with in house.
I made the switch from Smalltalk to Java. I remember how disappointing it was to return to such a primitive language. I also used the VisualAge for Java products, which I recognized immediately as Smalltalk tools. I wasn't at all surprised to find out they were actually written in Smalltalk.
While Smalltalk is awesome, do keep in mind that in 1980 it only ran on very expensive machines, and the software license was also expensive. This was still true 10 years later.
I've been thinking about a programming system that stores all the code in a database instead. Versioning would be easier that way too.
I would say the downside of this 'rich' environment is that your tools tend to become language dependent.
That does not necessarily have to be the case. Just like you have abstract collections capable of handling values of any type, I don't see why it should be impossible to have a model of versionable, editable code into which you could inject your own implementations for the particular language you're coding in right now.
I realized the other day how close persistent collections and tools like Git actually are. And given how most modern languages have programs that are essentially trees or groves, and given that said persistent collections tend to be trees...? Well, what do you think?
All? I'm sure Alan Kay would disagree. The problem with OpenDoc is that it emerged at an unfortunate time and asked you to work with vastly inferior languages than what you'd actually need to make this work without going insane in the process.
Sure, but compare "HTML + Web Browser + HTTP" to "ASCII + VT100"...
Right now the terminal is our window and it is nothing but colored ASCII. You can create something interactive and "graphical" like vim or emacs inside of it, but then you can't ship that complete dev environment very easily... it has to be rebuilt from scratch by the individual programmer.
The web makes it very easy to ship interactive, dynamic documents and environments around.
I understand what you're saying, but the problem is, I don't want to have to use the preferred editor of the person that happens to be maintaining some random projects I'm editing. I like my VIM and my grep, and my colleague likes her Eclipse and her InstaSearch.
Now, that doesn't mean we should eschew all metadata, but I rather see some structured data formats that can be interpreted by the editor and fitted to the user, instead of the other way around.
In fact, I'd rather see the web itself move more in that direction. HTML/CSS may be wonderful in terms of the freedom they give to the developer, but it's no free lunch: we lost a lot of potential interoperability by going with free-form, unstructured formats instead of more content and less design driven ones.
Why can't we embed vim or emacs in a web document?
We can, that's not the issue. The problem is that if you embed an editor - any editor - into the project, you're forcing everyone to use it, while currently each developer has the freedom to choose his own.
And if you don't embed the editor, then there's nothing to discuss; one can already use web based editors if one wants to, and I can use VIM ;)
Can I please point out that building graphical interfaces in ASCII is just about as bone-headed as my suggestion? :)
The whole reason I like VIM is because it barely has any graphical interface. The whole experience is extremely content focused, without a mess of buttons and toolbars obstructing the vision. I talk to it[1], it does what I tell it, and it stays out of my way.
What if <THIS SPACE LEFT BLANK> and the end-user can supply their own editor?
But then, how do we ship around the source editors meta data that is REALLY important, like, custom DSL syntax highlighting?
Do we need a universal syntax highlighting engine? What other universal engines do we need to build interfaces in to Emacs and Vim and Eclipse and XCode and Visual Studio in order to tackle some of these issues?
You just need some standard, structured format for describing the syntax of a particular language. And we already have these; we use them as source for the parsers of the languages themselves. So it's mostly a question of augmenting them to provide highlighting-specific hints to the symbols.
The question shouldn't be "Why not?" -- it's "Why?". If you are running a browser, you are already using an operating system. Why would you run another one?
Running editors (or whole operating systems, for that matter) in a browser seems like a backwards way of solving this. If you really need to ship a whole development environment including the editor, and not just the config files, project files or syntax highlighting plugins, you might as well use a virtual machine. If a programmer has no idea of how to set their editor of choice up for what they're developing, they won't be building a decent development environment in Javascript.
My point is that the web browser IS a virtual machine! It also happens to be the VM with the largest install base. It is a lot easier to ship something that runs on this VM than to convince people to download yet another VM.
And this isn't about developers knowing how to set up their editors. It is more about DSLs. It is more about the fact that there is more to coding than just writing source. Hell, it is about trying to come up with a way that we aren't totally beholden to source! It's just data, and frankly, not the most important data around software!
Sometime other paradigms, and I don't just mean other languages, but things like spreadsheets, graphs, and visual languages can be great DSLs that properly model and convey information MUCH better than source... but you can't really embed those in eclipse or vim, can you?
Think about if you released a project on GitHub and it didn't have tests, documentation, or example use. Would anyone use it? What if it was competing with other projects that DID have tests, docs, and examples? What if that code with tests was slightly worse? Which project are other developers going to want to interact with and use?
We live in a world where our programs, our source, and our peers live in a distributed ecosystem and as far as I can tell our tools and operating systems are starting to get in the way.
Have you seen how awfully messy web development becomes when you try to bootstrap from a filesystem? Things like JSFiddle are MUCH more elegant, but are, how shall I put it... missing some key tools for a new ecosystem of distributed computing. ;)
So if I want to edit some open source project, I would be forced to dick around with vim if the person who made the project wrote it in that? Are you trying to go forwards or backwards?
If you're trying to write code, what does HTTP bring to the game? I think in this case, HTTP would be equivalent to a really, really, really long cord for your keyboard to wherever your code is.
Also, if you're using characters or glyphs to write code, it's always ASCII (or your character set of choice). It doesn't matter if it's in the browser or in a terminal. HTML presenting the code that you're writing is just cruft.
Right now I'm looking at my terminal. It has a bunch of tabs. Each one of those tabs is a little "runtime". However, it is a runtime made up of ascii characters. That's what runs inside of it. Little letters, sometimes colored, sometimes made to look like borders for windows.
And I've got this other thing called my web browser. It also has a bunch of tabs, and each one of those tabs is a little "runtime" as well. This one just happens to support embedding images, video, audio, non mono-spaced fonts...
My terminal runtime communicates through the filesystem. (it's UNIX, files all the way down...)
My web browser "runtime" communicates through HTTP.
That's all I'm sayin'.
---
Also, I KNOW why UNIX is all files and I get the whole concept of piping ASCII around... yeah, it's fucking awesome! And it's why UNIX is STILL around. It'll be around forever, and man, I DO like it, but should we use it for everything? Why?
Piping ASCII around only gets you so far. :)
Why am I getting so much HN downvote love for talking about some of the downsides of the UNIX-way?
But most developers are already using runtimes that support images, videos, audio and non-monospaced fonts: they're called Qt, GTK, Cocoa, etc. So that's not really a reason to switch to the web.
Sure, but those runtimes are EXTERNAL to their editors!
BTW, are you familiar with Literate Programming by Donald Knuth?
And the reason I'm mentioning the web is because, well, it's a nearly universal runtime and the best thing we've got that might get a lot of really important and forgotten about concepts from Smalltalk back in to the mainstream of computing!
Also, are you familiar with Don Knuth's Literate Programming? Thoughts on that?
(And yes, I realize this thread is going absolutely everywhere, but I've got a lot of questions!)
Sure, but those runtimes are EXTERNAL to their editors!
No more than the web browser vis-à-vis the kind of editors you're talking about. They're built on top of the graphical APIs, much like web apps are built on top of HTML/CSS/JS.
(Note that I'm referring to graphical editors like Eclipse or Sublime, not ncurses editors in a graphical terminal emulator)
And the reason I'm mentioning the web is because, well, it's a nearly universal runtime and the best thing we've got that might get a lot of really important and forgotten about concepts from Smalltalk back in to the mainstream of computing!
Maybe, but without a concrete example / vision, I find it hard to believe that I'd be willing to lose the comfort of my development environment for the hack-y, keyboard-hostile world of web applications.
Also, are you familiar with Don Knuth's Literate Programming? Thoughts on that?
In a very vague way; it didn't really appeal to me, sorry.
Function/method/class docstrings (sorry, my comment may have been misleading) to document the purpose/contract/API, terse inline comments for the implementation only when something is not obvious to a skilled developer. And of course carefully-chosen variable names. I'm not at all a fan of heavily commented code. It just adds more bytes to comprehend, more bytes to maintain, and worst of all a high probability that the code doesn't exactly match the documentation, which causes a significant mental load. It also sometimes leads to people showing off in their natural language descriptions of their code. Same for literate programming: I want as few characters as possible that I have to understand, with that statement appropriately qualified :)
I can't say I see your point. What does switching to HTTP bring you?
HTTP is just piping around ASCII.
I don't think you're getting down voted for talking about the downsides of UNIX. You haven't actually mentioned a downside of UNIX, you've just suggested that we should switch to some other model.
Ok, so let's say you download a GitHub project... it has docs that run in a web browser, tests that run in python, some code that actually does something... it has some dotfiles for this and for that... well none of that DOES anything. It all depends on there being certain tools on your end of things...
I'm saying that instead, we just ship all the tools along with the code... and you can do that in browserland, and that's NOT the UNIX-way at all!
...it has nothing to do with HTTP, it has to do with the web browser runtime as compared to the runtime built on top of a filesystem!
UNIX land doesn't play well when running any arbitrary stuff that comes its way... sure, it might not have root, but, uhm, yeah if some code wipes out my entire user account, I'll be pretty bummed out... that's not gonna happen in the browser with its sandboxed environment that has been battle-tested by trillions of page requests!
Find me another runtime that is as many places and is readily available to run third-party, untrusted code, and I'll happily jump ship! (aint gonna happen!)
I have thought long and hard about why Smalltalk failed and have had numerous discussions with people and the general ideas are:
* poor marketing
* expensive, closed license
* not enough effort put in to tooling
But almost everyone agrees that there is a LOT that was lost when Smalltalk didn't take over the marketplace... the point is, it wasn't because they didn't go with the concept of a filesystem...
Author here. Yeah the syntax looks much better with correct highlighting. I've provided a custom definition for Notepad++ on the repo - and of course there is the definition used on the website too.
I'm also a big fan of DSLs and would love to see some of the things you mention. My next (fun) big project is actually not so far from the same lines ;)
Many thanks to everyone for the kind comments! It really is encouraging. :)
If you would like to take on something that is genuinely lacking from C, try creating something equivalent to C++ templates for C. I'm not sure that it can be usefully accomplished using only the C preprocessor. The C preprocessor is extremely underpowered. So it might require its own preprocessor.
I wouldn't make them exactly like C++ templates. I'd just focus on the useful things that you can't currently do in C, like write a qsort()-equivalent that doesn't suffer the performance penalty of callbacks. You can almost get there with macros, but not quite -- it would be ridiculously ugly at best.
The idea with that was that you could provide a structured description of a mini-language and it would be sufficient to generate not only a parser, but also a syntax-highlighting IDE with smart code completion, debugging and other neat features. It looks like they're still developing it too, which is a nice surprise: back when I was still interested in it, it never seemed to have taken off.
MPS sounds more like some kind of equivalent to Xtext, i.e. for ease of producing and hosting DSLs. Tools like Xtext and MPS actually tie you in to their host environments (Eclipse and IntelliJ respectively).
What I took GP to mean, and indeed what Yegge's Grok appears to aim for, is the need for a universal modelling framework 'standard' for languages, tools and runtimes. Some kind of equivalent to LLVM but in the 'other direction' for modelling language semantics instead of code. Once a tool (e.g. editor) implements that standard it is able to 'plug in' to any tooling that also implements the standard. Potentially eliminating huge amounts of duplicated effort for tool makers etc.
I am not sure I understand. Is there enough similarity between the semantics of languages that a one-size-fits-all solution is even remotely possible? It seems more likely that it would be something that would not-quite work for every language, and result in worse tooling for the sake of standardization.
Super stuff this, that's a very interesting approach.
I spent the better part of the last two years writing a (closed source, sorry) library that does some of this, and some other stuff besides (state machines, events, 'proper' strings with automatic garbage collection and allocation, message passing).
Maintaining static typing was a big prerequisite for that library; without it, too much value would be lost to offset the gains. It was a very educational project for me, and it definitely reinforced the 'half of common lisp' meme.
To program a piece of software using that library no longer felt like programming in C, every now and then you'd see a bit of C shine through in the lower level code. The whole thing relied to a ridiculous degree on macro cleverness (something to be avoided, for sure) and other deep knowledge of how C works under the hood to get certain effects, and I found this part of it less than elegant (even if the results were quite a big step up from programming in C).
The main justification for doing all this was to straighten out a project that had become bogged down under increasing complexity and a failure to abstract out the common elements. Choosing C for this project was a poor decision, but since there was not going to be any budging on that front I tried to make the job work out as well as possible.
It's quite interesting to see how far you can push C but at the same time you really have to ask yourself if you are on the right road if you find yourself doing things in such a manner.
Like Cello, the lib I wrote is a way to force the language to become another language, which always has drawbacks in terms of unintended side effects and long term support.
Probably better to switch to a platform that is closer to your problem domain (in this case, such as erlang, clojure or even go), as much as I liked tinkering with C it felt like we were making life harder than it needed to be.
I'm looking for standards / set of libraries / best practices for "modern" C development, but I've yet to find a comprehensive resource.
Stuff like typedefing a manual fixed sized int type to be cross-platform compatible, that books don't really tell you to do but are important and come up often.
I'd be okay with a small, well written example library too. Does anyone happen to know something like this?
edit: Ah, sorry if I misled you, that was just an example of the kind of tips and pointers I was looking for. Or weird bits like the linux kernel list_head. http://kernelnewbies.org/FAQ/LinkedLists Or common libraries like bstring that make life easy. Or even a single, comprehensive implementation of good data structures, since everyone seems to have their own vector.h and/or hash.h that fails to cover much other than their own use case.
You may want to study the source code of a well-written, modern C project like git: https://github.com/git/git
The fixed-size int finally got a permanent solution: #include <stdint.h>
If you are targeting autoconf/automake as your build system, that has a lot of built-in solutions to portability issues, like defining macros. It's not easy to learn, and I don't pretend to know it well, but when I'm compiling someone else's project, I'm always happy to see a configure script.
What I find annoying about libraries like glib is that they tend to impose their own style on your project by using their own typedef'ed types and such.
If you don't mind it, you can cobble together your own data structures from various open-source projects. Judy arrays are pretty fast, and you can use them in a variety of ways. Searching google for "c hash table" came up with a lot of excellent results, so try googling whatever data structure or algorithm you need, and chances are, you'll find something.
Glib/GObject is pretty good and seems to be what you're looking for: https://developer.gnome.org/glib/2.37/ It's not tied to the GNOME platform at all but rather contains building blocks such as hash maps, linked and array lists, heap allocators, string manipulation functions and so on. Personally, I find some parts of GObject to be over engineered and distasteful but most of it is solid utility.
Then you have https://wiki.gnome.org/Vala which is a whole new language built on top of C + GLib/GObject whose main selling point is that it compiles to, and is totally compatible with, plain C code.
Do keep in mind that all of this, Glib, Vala, Cello and other "make C more like a high-level language" projects are basically hacks to work around the fact that C is a very low-level language and lacks many powerful features. I believe one is much more productive using Glib + C than just plain C, but you are still less productive than if you had chosen a modern language in the first place.
As somebody with the same query, let me hop on - how does "Learn C the hard way" fare? I've got it mentally bookmarked for the next time I touch C code.
Pretty good, in my opinion. It's fairly comprehensive, and spends a lot of time and effort pointing out why things in C break and how to be preemptive about fixing them.
A fair bit of discussion around your standard exploits as well.
I'm a fan of BSD sys/queue.h and especially sys/tree.h; they are liberally-licensed header-only implementations of various kinds of linked lists and binary trees, respectively.
Wow, this is an impressive amount of high-level feel for relatively little preprocessor code (and a fairly lightweight C library underneath that). Holds together pretty consistently, which is hard to do with syntax extensions built on top of the C preprocessor, vs in languages with more convenient syntax-extension or macro systems.
I spent the last hour trying to get the example programs on the front page of the libCello site to compile on OS X (10.8.4). I discovered I was missing some include flags. This is what finally worked:
$ gcc -lCello -std=c99 -fnested-functions example.c -o example
Note that "var" is a typedef'd "void *". This essentially bypasses C's typechecker for libCello code. The author admits as much, and maybe that's just fine for what you need to do, but you should be aware of it.
I wrote a similarly-themed (although much less complete and much less useful) package for Go called 'Proto' which essentially sidesteps the static typechecker by mapping the 'base useful type' to `interface{}`, which is philosophically similar to `void *`.
I personally have no problem with it (other than the syntax needed to unbox/rebox values). I find that having the freedom to use a type system or not a very compelling feature in a language.
That being said, I understand why it might sit very poorly with some.
Though in Go, you can always use reflection to get back the concrete type in an `interface{}` type. In C, `void *` is pretty much all you get. This causes far more subtle bugs, IMO.
Source: someone who hasn't done much C and only a little bit of Go. So take it with a grain of salt.
random question: doesn't ObjC define 'nil' (or self?) as 'void*'? I did some ObjC coding 2 years ago and I remember seeing something like this and thinking: oh boy.
The 'id' type is a void *. It is used extensively, whenever there are multiple possible return types, even where inheritance could have been used to make it more specific.
After looking at the source, this appears to be a great beginners resource of how to build on top of C. The source is very concise and straightforward. I'm curious to see what will come from this.
Libcello has its share of latent macro bugs, but it doesn't seem particularly bad. However, I can hardly think of anything worse than debugging piles of half-broken higher-order C macros written by beginners.
People interested in real-world high(er)-level C programming should take a look at this book, "especially the class methodology in Chapter 4": http://www.duckware.com/bugfreec/index.html
Side note: this book would certainly be down-voted on r/programming but I expect more grown-ups here.
An interesting experiment, but even as the author states "it's a fun experiment". It makes things easier to read & understand for beginners, maybe, but he even states that it's not for beginners. If I have to be a C power user to use it, I imagine I'd feel more comfortable without it. Just my opinion though.
I feel quite similarly about this; it's in some ways like training wheels on a motorcycle.
But I think there could be quite interesting uses for Cello: namely, you do your prototyping with it, and then you throw out the library and refactor the code into pure C.
What is this? It claims to be a (GNU99) C library, but I don't see how this can be the case, considering all the non-C constructs in the sample code ($(), try/catch, foreach). So is this just a language of its own that is compiled into C?
To solve this problem, with GCC, you have the option of defining foreach as
{ typeof (xs) _xs = xs; \
for (var x = iter_start(_xs); x != iter_end(_xs); x = iter_next(_xs))
but then you need a corresponding endforeach to apply the matching }. If you just leave off the opening {, then you end up with a mysterious compile error on
if (ready(queue)) foreach(x, get_next(queue)) f(x);
since the declaration of _xs is not a statement.
And this kind of thing is why a "macro system" means something very different in C and in Lisp, and why Brad Cox wrote a compiler in 1983 instead of a macro library.
Oh, that's excellent! I don't know why the C9x for-declaration didn't occur to me as a possible solution, particularly since the macro was already using it. You should submit this as a fix!
$ is a valid function name according to gcc. try/catch/foreach are #define'd as part of it. As an example, foreach is defined to a for loop over an iterator, which is any Cello class with a couple of functions defined.
I'm guessing that the GNU99 qualification is important. GCC supports most if not all of the claimed features, or at least the low-level constructs required to create them.
This is the kind of functionality that the D language is really good at. If I were to go beyond the fun part of this project, I would have a look at D.
I think it's more useful than D though, because I can just add a header file to an old C project and make things better and still have things interoperate.
Cello is a nice proof of concept, but personally I'd like to see only one or two changes to the C standard:
1. sizeof(function) -> would give the user the ability to copy functions around.
2. maybe a new reserved keyword _Func -> a function tagged with _Func would indicate that the function must be compiled as a function object (with a defined sizeof), and the compiler would need to address the fact that the function may be moved around and used (relative addressing and, I guess, a bunch of other problems that would arise). Only code, nothing to do with ELF or other formats.
Another interesting thing to do would be to, somehow, eliminate function pointers with _Func.
In any case, the user would be responsible for creating the environment for those (lambda?) functions, like manually setting mprotect or setting up the stack (prologue.h & epilogue.h ???).
This looks very nice indeed. The main thing that will interfere with usability for me as a non-C guru is the lack of thread support. But I am really grateful for the effort since my "spiritual home" among programming languages is definitely the dynamic languages, yet I appreciate the need and beauty of C in many instances when performance is necessary. libcello's apparent optional static typing (the "var") is really nice -- it's one of the wonderful things about using Cython alongside Python.
So, because I am a nub in this stuff... When it says C library, does that mean anything that works with C (say a gui library for example GTK) will work perfectly fine with this? I would just change the syntax as required and call it good?
You can write regular C GTK+ code side by side with it, but you would need a wrapper in order to use most of the constructs with GTK+, or anything else not written with Cello's type system in mind (e.g. var w = $(GTKWidget) would not work for multiple reasons).
It seems interesting, but I don't quite get it. Does it try to add some C++-style syntax sugar? Does it have a performance advantage over C++ for similar functionality?
I am not a fan of C++ syntax; can I still get something from Cello?
Loving it, definitely.
I was playing with this sort of thing these past months and was eventually ending up with something similar, but this is far beyond what I was doing.
I think I will contribute to this lib if I can instead of continuing my own :)
I hope this gets more and more popular, in a way that the natural next step would be implementing a special parser for it in GCC and Clang (as some sort of C subset)..
Took a look: interesting library. Documentation looks good. When following the link to 'readthedocs.org', it took me to http://libcork.readthedocs.org/en/0.10.0/ which has a message on top that says it's not the newest version. Editing the URL gets me to http://libcork.readthedocs.org/en/0.11.0/. Not sure where it should go, but might be worth checking. Thanks!
It seems to be targeting the same market as Go or Haskell OO more than that of C++. A class in this model basically means 'definitions for a set of methods on this type', so it's more like what you understand as an interface.
For example, there's a "show" class that converts your object into a string. Any object that implements the necessary methods can thus be printed.
Another example is an iterator class. If you implement a couple of methods (move the cursor to the start, increment the cursor, see if the cursor is at the end, and get the currently pointed-at item), then you can be iterated over by a foreach loop.
Unlike Go, though, with Cello you seem to need to explicitly specify the functions used to implement each class.
They were referring to Haskell typeclasses, which are a way to dispatch to different function definitions based on your data's type, which, if you squint, looks sort of like OO without inheritance.
Yeah. It's not clear to me that you can just do arbitrary overrides, though, like saying this type is the same as that one except with one function replaced. I can imagine building such a system out of Haskell primitives, though.
Identity, for one. Haskell is value-oriented. If two values have equal parts they are considered equal, even if they might be stored at different addresses. Barring the FFI (which is an unsafe extension), there is no way to distinguish otherwise equal values. Thus, Haskell has no concept of object identity which is necessary for a system to be object-oriented.