HTTPie: A cURL-like tool for humans (github.com/jkbr)
312 points by jkbr on July 18, 2012 | 83 comments



This is a nice curl and wget replacement that handles a bunch of modern use-cases without a lot of hard-to-remember command-line flags.

That said, there is a broader problem of "hard-to-remember command-line flags", which I have personally solved with snippet management (I use Notational Velocity or my command history, whichever is handier).

There is no doubt httpie's interface is a lot better, but it creates another problem (again, which is somewhat universal) of installing, learning and remembering to use a new tool. This is a non-trivial problem that is a key concern for anyone evaluating a new tool, and it's a problem that only really gets solved with ubiquity.

Finally, an observation: so many of our "traditional" command-line tools pay no attention to usability because, at least back in the day, the problem they solved was hard. People had a choice: either put up with an (admittedly) bad interface or write their own version in C. For any individual, the cost of learning the bad interface was lower than the cost of rewriting the tool, and so standard tools were born.

And now, decades later, new generations are stuck having to learn needlessly obtuse interfaces to standard tools. We have a situation where newcomers pay the cost of developer UI laziness in perpetuity. This is, of course, a terrible outcome and it's projects like this one that are trying to change it.

So I applaud the effort and hope it catches on and becomes ubiquitous, so I can take the curl and wget snippets out of NV.


> People had a choice: either put up with an (admittedly) bad interface or write their own version in C.

That's totally false. libwww-perl was created 17 years ago and predates curl by a few years. There's nothing new about capable scripting languages and Unix.


DELETED

This was a warning of a hazard to navigation, when a more diligent effort to remove the hazard is called for.


>on at least some occasions the Python client gets non-standard results, while curl and (Guido, forgive me) PHP do fine

Ok, young one, here's the thing. If you see a problem like this then file a bug. Ideally write a test case. And if you're an overachiever, dig into the code and fix it. Any bug in an http client that reposts data is incredibly serious, and needs to be fixed.

The other value in doing this is that you don't spread Fear, Uncertainty, and Doubt - or FUD, as it is often called. FUD is usually ascribed to big companies trying to discourage use of a competitor's product, but it can also be spread inadvertently by the ignorant or misinformed. No offense, but I think that's what happened here, because Python is not a niche language - it (and its HTTP libraries) is used by a lot of people, and the error mode you describe is very, very serious.


No offense taken.

I did report this to the API maintainers, who couldn't make heads or tails of it. I didn't file a Python bug because I honestly don't think Python is the problem.

But you are right, it is more responsible to pursue this until fixed rather than raise warnings.


Do you have a link to the report? I'd be interested to read the discussion.


No. I will be following up on this; if you send me an email (address in my profile), I'll try to let you know when an analysis is available.


FYI, the `email` field in your profile is not publicly visible. If you want other people to be able to email you, you need to put it in your `about` text.


+1. curl probably had bugs early on too (and still does). These get fixed over time, but that doesn't mean improving the interaction isn't worthwhile in the first place.


Everything on HN is just "hey, learn to use this complicated complex thing - in just 30 minutes! You'll be so productive with all your creative startups!"

"Learn VIM essentials in this blog post!" "Never bother reading 'man curl'!" "Learn the basics of C in three easy steps!"

Sometimes HN feels like a lifestyle magazine for people who dream of being PG.

EDIT: Don't get me wrong, I too dream of having the same success as PG. Why else would I be writing here?


Also not sure what your complaint is. Working at the right level of abstraction is fundamental to being a good engineer.


There is no criticism really; I agree with you. I was just feeling uneasy and vented it. I welcome downvotes to my GP comment.


What's your criticism exactly? What kind of content would you personally like to see more of?


None, merely an uneasy feeling. If you ask me personally, I really enjoy articles about something technically extraordinary that the OP made or wrote about, amongst other things.


"for humans" seems to have no better meaning than "for the OS X sensibility".

In other words: this is a style change rather than a productivity gain. And the superiority of the style is not obvious - unless you just HATE the style of existing tools and need to be set apart.

Most humans don't operate the command line or write scripts to begin with. Those who do can usually handle wget "http://foo/bar". It took me all of a few seconds to start using wget and all of 10 minutes to have access to fancier features. (But the truth is that a certain level of complexity really wants a script rather than ad hoc commands.)

So here is a new tool, and it looks nice. But it doesn't at all relieve me from having to learn syntax and conventions - I still have to go to a doc/manpage and read that same kind of technical prose. So the only effective difference is that now I am using different punctuation, like @filename and -b. But the use of this "@" character is not really consistent with anything else.

So the tool is fine and I am sure people will use it but the competitive advantage is incredibly thin and the project smacks of NIH.

If curl and wget are not for humans then what are they for? People who do not have that magical design sensibility. Lame code-monkeys without vision, who are not creative and different. Soulless agents of the man.

This emphasis on branding over substance irks me quite a bit.


Gosh, what a horrible comment! The simple fact is that curl and wget's interfaces are bad, and everyone who learns to use them spends that 10 minutes. And then, if you're like me, you respend that 10 minutes whenever you need to do something fancy again. If a new tool saves 5 minutes (which this one does), that's 5 minutes saved on every use - probably a total of a few hours for me personally, and spread over the population of future users, a few lifetimes.

Rather than admit that even incremental usability improvements are not only useful, but continue to pay dividends long after the tool is produced, you lambast the author and the effort.

Not cool.

P.S. You should watch Bret Victor again, talking about how much easier it is to crush an idea than to support it and nurture it. http://vimeo.com/36579366


All the competing tools are not "for humans." Who are they for, then? If you don't see the negativity in that sales pitch (and your own complaining about other people's creations, which were also the products of hard work) then I don't think you are being honest with yourself.

If this tool actually didn't require me to go to a man page and read lots of flags (.... exactly like every other tool) then I might be more enthusiastic about its advantages. But I don't see advantages. I just see yet another tool. Well, that's fine... use what you want. But I don't agree that it is superior in any clear way, and that's really all I said.

People who are really capable can handle and learn from critique and don't require constant sugarcoating because their minds are more at the level of problem solving than ego defense.


curl's interface is a PITA for testing JSON APIs. HTTPie's isn't:

  http POST http://localhost:3000/person/create name="James"
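For comparison, the rough curl equivalent is something like this (a sketch; exact quoting and headers vary):

    curl -X POST http://localhost:3000/person/create \
         -H "Content-Type: application/json" \
         -d '{"name": "James"}'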
The "for humans" bit signifies the author has taken care to simplify the interface for practical use, rather than for shell scripting.


Yup, I was excited to see that.

I have a text file with examples of testing different kinds of API calls w/ curl, and I have to refer to it all the time because the syntax is so easy to get wrong if I just type it out. I've also had to waste lots of time supporting other developers using our API, who got tangled up in the curl syntax in the examples I provide them.


Your comment is an ad hominem attack where you falsely accuse someone of making an ad hominem attack.


You know, I was about to defend myself, but you may be right. My tone of outrage at slurgfest being wrong was itself rather wrong. If he didn't see the long-term incremental improvements this tool offers, that's nothing to be outraged over. People make mistakes.

That said, I think it's a stretch to call this mistake of mine an ad hominem attack. It doesn't fit my normal understanding of such an attack - there was no name calling, etc. And certainly I didn't accuse him of an ad hominem.


Thanks for pointing that out. I made slurgfest's comment a proxy for slurgfest when I interpreted your comment, and that is indeed a stretch. I agree that neither of you were all that harsh.


curl and wget might not follow Unix conventions all that well, but from the look of it neither does HTTPie.

Thus the GP's point stands that this program offers little more than a stylistic change. And a bit of pretty printing, which you could (ideally) pipe into some syntax highlighting script or an editor to get the same effect.

Skip the docs, for example, and you'll wonder why it defaults to JSON encoding rather than the standard HTTP POST format. So the 'simple' version is now dependent on a trend rather than the HTTP standard. In fact, even with a modern API I don't recall ever having to serialise my data before posting it. So why this?

That's a few minutes wasted on RTFM already. What else will there be?


Still, you pay a one-time cost of reading the manual, then the ongoing benefit of using a nicer UI. With cURL you pay the one-time cost of reading its manual, but also pay the ongoing cost of dealing with its unintuitive, verbose, and generally less human-friendly UI.

JSON encoding is a sensible default these days for a tool to be used by humans, though I agree there's a good "principle of least surprise" argument for defaulting to plain HTTP POST. It would also be nice if it had a setting between "interpret response as JSON" and "uglify the response", but OTOH those additional formats are good opportunities for user-contributed patches.
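(For the record, switching to form encoding is a single flag; a sketch based on the --form option mentioned elsewhere in the thread, with a made-up URL:)

    http POST example.org/api name=John         # sends {"name": "John"} as JSON
    http --form POST example.org/api name=John  # sends name=John, urlencoded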


You're totally off here. curl, wget, etc. are tools that make the most common use cases overly complicated. There are 900 million command-line flags, so figuring out how to do a POST and set a particular header can take 10 minutes. 10 minutes is an awfully long time to learn how to perform an HTTP request! On the GitHub page, by contrast, I can learn the 5 things I'd actually want to do in about 3 seconds. And it colorizes the JSON output to make it more legible? Sweet. That's called productivity.
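For example, a POST with a custom header and a field is more or less guessable (a sketch using the Header:value and key=value syntax from the README; the URL and header name are made up):

    http POST example.org/api X-API-Key:abc123 name=value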

Interestingly enough, people seem to search for curl tutorials much more than wget: http://www.google.com/trends/?q=curl+tutorial,+wget+tutorial...

Having something that makes life easier with less complication IS what tools are all about. That's called evolution. Why have the web when you could have gopher? Because it's _better_, even if you can 'accomplish the same things.'


> Most humans don't operate the command line or write scripts to begin with.

I'm not a huge fan of cURL, but most people who use cURL don't use the command line either. They use the cURL library and access that functionality through a high level language (PHP, C, C++, whatever).


That's true, but it's no better there, either.

Here's some sample curl client code in PHP.

      $c = curl_init("https://someurl/some/api");
      $msg = ''; /* some data here */
      $opts = array(
        CURLOPT_POST=>TRUE,                   // send a POST request
        CURLOPT_USERPWD=>"<password string>", // HTTP auth credentials
        CURLOPT_HTTPHEADER=>array("Content-type: application/json"),
        CURLOPT_POSTFIELDS=>$msg,             // request body
        CURLOPT_SSL_VERIFYPEER=>FALSE,        // skip cert checks (testing only!)
        CURLOPT_RETURNTRANSFER=>TRUE          // return body instead of printing it
      );
      curl_setopt_array($c, $opts);
      $d = curl_exec($c);
      curl_close($c);
Every option in that ugly argument array corresponds directly to a commandline option.
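For comparison, roughly the same request as a command line (a sketch; the placeholder password string is carried over from the snippet above):

    curl -X POST -u "<password string>" \
         -H "Content-type: application/json" \
         -d "$msg" -k https://someurl/some/api

(-u maps to CURLOPT_USERPWD, -H to CURLOPT_HTTPHEADER, -d to CURLOPT_POSTFIELDS, and -k to disabling CURLOPT_SSL_VERIFYPEER; CURLOPT_RETURNTRANSFER is just output capture, which the CLI does by default.)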


On the other hand, I guess you could write a nice library that uses curl under the hood; here it's just a case of ugly design.


Indeed, I do exactly that: http://requests.ryanmccue.info/

I completely agree on cURL's design though. It's painfully obvious when working with the cURL bindings in PHP that not much thought was put into adapting it to PHP's constraints; it's essentially just the command-line client in a "nice" wrapper. It ends up being a huge pain to work with.


I couldn't disagree more. I wish I could disagree more, though. Your comment shows a complete disregard for User Experience and User Interface, and this disregard is the reason why so much stuff simply "sucks" nowadays.


If by "for humans" you mean "for programmers who mostly use JSON". I have to say I'm not sending JSON with cURL too often, compared to any other payload. And when I do send JSON it's more complex than a flat set of keys and values.


There is more to it than that. It provides an expressive syntax to construct all sorts of requests. You can submit forms, upload files and set headers without having to use flags. You also get syntax highlighting, ability to pipe in request data, etc.

If you are sending complex JSON, it's probably stored in a file or it's the output of another program:

    http PUT httpbin.org/put @/data/test.json

    http -b localhost:8888/couchdb/ | http PUT httpbin.org/put


I don't really think curl is much harder, e.g.

    curl -X PUT -d @data/test.json httpbin.org/put

It also has the advantage of supporting all the options you might ever need; for example, HTTP authentication and proxies are often useful.
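For example (both flags are standard curl; the host and credentials are made up):

    curl -x http://proxy.example:3128 --user alice:secret https://httpbin.org/get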


It has the disadvantage that it's not at all obvious what it does unless I open up the man page.


HTTPie also supports proxies and HTTP auth (see --proxy and -a/--auth).
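Something along these lines should work (a sketch; check --help for the exact argument format, the host and credentials here are made up):

    http --proxy=http:http://proxy.example:3128 -a alice:secret httpbin.org/get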


Doesn't support socks 4/5 proxies. :)

Requests and urllib3 have a long way to go to be complete competitors with cURL.

P.S. Good job nonetheless. Seems like a good idea to make a specialized HTTP CLI client for JSON/RESTful services.


Why not use JSON syntax for JSON instead of introducing your own, though?


You can still use regular JSON syntax, if you like:

    echo '{"foo": "bar"}' | http url
The reason for having the simplified one is that it's less verbose and usually doesn't even require you to quote it:

    http url foo=bar


Yeah, most of us would forget to single quote the JSON. It is definitely more verbose too - I agree with the command-line compatible simplification.


I think the tool looks useful. I'd appreciate something like this for manually poking web servers when conducting a security assessment.

[Edit: removed HN snarky comment]


The UI for curl is awful (--request to change the method??) and wget's is only slightly better, but they do have the advantage of ubiquity, and it's often useful to email/Skype complete curl or wget command lines about the place to explain how to use an API, or demonstrate problems.

(e.g. Stripe and others document their API in terms of curl commands: https://stripe.com/docs/api.)

I do wish the curl UI was better, but I can't see it being trivially replaced. (It's a similar issue with git: bad UI, but every git question and answer is described in terms of the CLI, so even if you prefer a GUI client, say, you still need to be able to formulate your problem in CLI terms for anyone on stackoverflow to understand you.)


It is possible to have another tool become ubiquitous (and in this case, it really would be for the best). In order to do that, though, you have to have a better tool, so this is a great start toward making that happen.


Curl after all mostly replaced the earlier wget.


Except wget -r, which is still widely used and has no equivalent functionality in curl.
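e.g. mirroring a documentation tree (a sketch with a made-up URL; -np stays below the starting directory and -k rewrites links for local browsing):

    wget -r -np -k http://example.org/docs/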


You can also change the request method with -X.


I'd also recommend Curlish (http://packages.python.org/curlish/). It performs nice JSON highlighting and also handles OAuth 2.0 token authentication.

It simply wraps curl(1), so all of the familiar arguments and recipes continue to work just fine, as well.


It looks great! I see no reason to slight cURL, though. Its CLI was intended for humans.


It's not meant as an insult to cURL. cURL is a great library/tool and supports way more than HTTP, but its command-line interface simply isn't as convenient for everyday HTTP tasks as it could be. The "for humans" slogan is borrowed from the underlying python-requests library and is meant to communicate that good UX is one of the top priorities of the project. Glad you like it!


I don't think it's the most obvious kind of good UX, but rather good UX for people who value expressiveness, like Ruby programmers. The items that change meaning based on symbols are quick to type, but that comes at the cost of clarity. curl's parameters aren't very clear either, and since it supports more than just HTTP, it's harder to go through the man page with all that it supports. I think there's still room for a tool that uses option names/letters to differentiate between headers and body properties rather than symbols.


It is UX with the inflection of Apple's Human Interface guidelines, which have always been a mix of technical guidance and branding/marketing guidance (as opposed to pure, wonky, research-based, specifically tailored ergonomics, which defines usability relative to specific tasks and audiences, some of which may be highly technical and appreciate complex tools).

Apple discovered that words like "usability" and "human" are very powerful ways of framing competing products as unusable garbage not fit for human consumption. And they worked very hard and were very successful at fitting these words to their brand. This cuts the legs off any competing marketing. If you were a competitor the best you could say to this was something like "we have more games" or "we are cheaper" or "we have higher clock speeds" or even "you have more choice." Meanwhile the audience glazed over and felt threatened. Apple told the same audience that actually they were better than those hobbyist losers wasting all their time because they had bigger issues to worry about, they were discerning and frankly they were cooler.

Marketing is a high art and Apple is sitting on top of huge mountains of money after fighting Total War for decades. Good for them.

When it comes to evaluating libraries and command line tools, invoking the words 'human' and 'usable' still indicates that the developers and users of the old one are losers focused on irrelevancies, with huge amounts of time to waste on tools that are just plain unusable for anybody and unfit for human consumption.

Anyhow, there is more to this UX than just what is nice to use.


You've done a wonderful job of expressing my sentiments on this issue. What annoys me even more is the degree to which projects like Gnome copy Apple's design decisions that make their tools less effective for "Power Users".

By the way, the "Power Users" term is used to marginalize people who are willing to learn to use their tools. The results of using this term in this way have IMHO slowed the growth of the computing industry.


Thanks for the feedback!

I tried to come up with a syntax that makes the most common tasks (e.g., sending JSON objects, submitting forms, setting headers) as easy as possible and also feels "natural". The reason for the chosen style is that it closely corresponds to the actual HTTP request being sent. For example, say you want to send a PATCH request with a custom header and a form field:

    PATCH /patch HTTP/1.1
    X-API-Token: 123
    Host: httpbin.org
    Content-Type: application/x-www-form-urlencoded; charset=utf-8

    foo=bar
You can simply copy the header (X-API-Token: 123) and the data (foo=bar) and paste it to the terminal:

    http --form PATCH httpbin.org/patch X-API-Token:123 foo=bar
It's not as explicit as '--request PATCH --header X-API-Token:123 --form foo=bar', but on the other hand, the command includes almost nothing that doesn't become part of the actual request, which keeps it short and makes it easy to focus on what's important.
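For contrast, the actual-curl version of the above would be roughly the following (a sketch; note that curl's -F/--form means multipart, so --data is the urlencoded analogue here):

    curl --request PATCH --header "X-API-Token: 123" --data "foo=bar" httpbin.org/patch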


Hmm, that makes more sense. If it isn't already in the README or man page, it might be a worthy addition.

I suggest you add an option where you can start off with a bare request.

Is there a way to have it construct the query string when you are doing a GET request? Also, is there a way to have it construct a query string when you have a JSON or form body? Might be something to add below the description of items, as something that doesn't fit into that list but is related. Perhaps -q page=2 -q rpp=20 would be a good way of saying it.
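(For what it's worth, curl covers this case with -G, which turns -d pairs into a query string; the URL here is made up:)

    curl -G -d page=2 -d rpp=20 http://example.org/list   # GET /list?page=2&rpp=20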


> Hmm, that makes more sense. If it isn't already in the README or man page, it might be a worthy addition.

Good point, I'll add it to the README.

> Is there a way to have it construct the query string [...]

Not yet, but it's already being discussed here:

https://github.com/jkbr/httpie/issues/61


You could just use our curlish wrapper (http://packages.python.org/curlish/), which wraps curl directly. You get a nicer UI if you want, but still the full power of curl.


Another alternative, in Ruby, is HTTY - https://github.com/htty/htty


One difference to note is that HTTY puts you in a REPL-like environment instead of being a one-shot command like curl.


This is a great idea. Sometimes when something is not working, the last thing I want to do is pore through the curl man page before I can even get started figuring it out.


I was looking for something like this a while back and found a useful Firefox plugin called Poster (https://addons.mozilla.org/en-us/firefox/addon/poster/). It's useful for testing a RESTful API without writing any front-end code to handle the requests. It can do anything you can do with cURL, just simpler.


Hope a little bit of self-promotion isn't frowned upon - I have a similar app [1] on the Mac App Store. One nice thing it does is allow you to save a request in a file, to be re-used later. A lot of folks I've talked to have used this to share test cases in a QA organization, and I think it can be kind of handy (although the free browser-based ones are quite good, too).

I'm working on adding conversion to/from cURL and wget commands, which could be helpful when working with APIs that are documented with cURL examples.

It costs $2, but I'll give anyone who asks (well, up to 50 people at least) a free copy - my contact info is in my profile.

1. http://itunes.apple.com/us/app/graphicalhttpclient/id4330958...



A coworker recently pointed us to yet another one of these, Dev HTTP Client, which has worked really well for us.

https://chrome.google.com/webstore/detail/aejoelaoggembcahag...


My main difficulties in using wget are in organizing output location (things like -nH, --cut-dirs, -P), choosing between -nc / -c / default (rename), error / retry policy (-T / -t), logging (-a vs -o), etc.

This tool doesn't really solve any of my actual problems. YMMV. It's less a cURL replacement than a web API invocation tool.


I like it, though I got around curl's painful syntax by wrapping it in a few shell scripts. For example, here's my api_post script (meant to be used like "api_post users/123 first_name=Foo last_name=Bar") (pardon my incompetent shell scripting and redaction of company internals):

  #!/bin/bash  # arrays and 'declare' below are bashisms, so bash rather than sh
  
  resource="$1"
  shift
  
  declare -a post
  while [ "$1" ]; do
      post=("${post[@]}" "-F" "$1")
      shift
  done
  
  if [ -z "${API_BASE:=}" ]; then
      API_BASE=http://localhost:3000/api/v1/
      echo "No API_BASE set.  Using $API_BASE."
  fi
  
  verbose=""
  [ -n "$API_VERBOSE" ] && verbose="-v"
  
  if [ -z "${API_COOKIES:=}" ]; then
      cookies=~/.api.cookies
  else
      cookies="$API_COOKIES"
  fi
  
  curl -0 -k -s -S $verbose -b "$cookies" -c "$cookies" -X POST "${post[@]}" "${API_BASE}${resource}"


I was prepared to not be impressed, but this looks nice to use. Installed.


This is a good tool. I wish there were an editor-integrated interactive HTTP tool.

OT: Is there an Emacs package that can do interactive invocation of HTTP? Something like a text buffer holding all the URLs, where hitting Ctrl-E on a URL invokes it and displays the HTTP response headers and body in separate buffers.


OT: You just pretty much summed up restclient.el - https://github.com/pashky/restclient.el


That's fantastic. Thanks for the find. Emacs once again delivers.



For website scraping, something like CasperJS/PhantomJS or Selenium is more suitable, since they emulate a browser and evaluate JavaScript, which curl cannot do, and it is not clear whether this tool can.

What are the use cases where curl is more suitable than phantomjs or selenium?


APIs. Lots of HTTP usage is not browser-related.


phantomjs doesn't just emulate a browser, it uses WebKit.


If you are doing cURL calls as a part of testing functionality, you may want to consider using a tool like Frisby.js ( http://frisbyjs.com/ ) to create a suite of automated tests that involve HTTP calls.



Not applicable. This is not a new standard; merely a new tool.


Replace standard with tool. Still applicable.


So by that rationale, we should have stuck with Mosaic, and Internet Explorer, Firefox, Chrome, Safari, Opera, and Konqueror are all bad things?

How about the various mail clients, office products, ftp clients, torrent clients & servers, programming languages, power drills, clothing lines, gasoline engines, kitchen knives and so on?

Tools are about innovation. Standards are about locking down feature sets so that tools can interact in a common way. The whole point of the xkcd comic is that too many standards makes it difficult to make tools, and adding yet another one-standard-to-rule-them-all usually backfires.


API docs often show examples with cURL. If people start writing examples with httpie, that's yet another thing to learn.


yeah, couldn't agree more


I really like how writing little tools like this in Node is so simple:

    node -e "require('request')('http://www.asdf.com/').pipe(process.stdout)"
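(This assumes the request module is installed first - e.g. npm install request - since it isn't part of Node's standard library. Note the quotes around the -e argument, which keep the shell from mangling it.)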


A similar project is htty, which presents an interactive console UI to make and inspect http requests.


This is fantastic! Thank you!


I find the implication that people who know how to use curl are inhuman to be insulting and arrogant.



