Making your own web debugging proxy (twiinsen.com)
87 points by ejcx on July 24, 2016 | 21 comments



You can do that using mitmproxy as well, as explained here: https://dadario.com.br/mitming-ssl-tls-connections/


I use mitmproxy and mitmdump a lot, and I really recommend them. Someone else said it's not easy to hack on; I disagree. It's a really approachable project and great if you're doing anything HTTP-related. That said, there are some pain points:

* libmproxy is not recommended for external projects; instead you should use "inline scripts" (embedding functionality in mitmproxy). This is a pain point for me, since I wanted to work on the captured streams in an external program that already existed, and I shouldn't need to run mitmdump to do that work. The dumped streams are not in a standard format either: they're serialized Python objects, and dumps from different mitmproxy versions sometimes break the format.

* Performance. Having more than a couple of concurrent requests at a time tends to eat CPU. Running the requests of one browser through it is fine; running multiple browser instances through it (e.g., using the new headless library in Chromium for automated testing), continuously performing requests, is not a good idea. Memory footprint is also quite high, but that's not a limiting factor for me.

I would like it if the dump format was a standard format and if libmproxy was a stable API.


Mitmproxy author here. It is kind of interesting that one of the best ways to get feedback for your software is reading random comments on the internet - thanks for that! :)

> libmproxy is not recommended for external projects.

This is true (for the reasons outlined in [1]), but your use case is the reason why we also offer the libmproxy API. That being said, you'll see improvements here in the next release (and hopefully a lot of stability afterwards).

> The dumped streams are not in a standard format either

We would love to use JSON, but JSON does not really work with streaming. We use tnetstrings (not serialized Python objects) instead, and we have been doing schema migrations for the last 5 releases now. We have an example of how to read dumpfiles in Python in the repo [2]. :)
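For reference, reading a dump back looks roughly like the repo example — a minimal sketch, with the filename made up and flow.FlowReader/stream() assumed from the releases current at the time:

    from mitmproxy import flow

    # iterate over the flows stored in a mitmdump capture
    # ("traffic.mitm" is a hypothetical filename)
    with open("traffic.mitm", "rb") as logfile:
        reader = flow.FlowReader(logfile)
        for f in reader.stream():
            print(f.request.method, f.request.url)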

> Performance

Thanks for bringing that up. Scaling beyond a few concurrent users is not a design goal for mitmproxy currently - otherwise we should probably start rewriting it in Rust/Go. If you need anything large-scale, the submission here is vastly superior. :)

[1] http://docs.mitmproxy.org/en/stable/scripting/mitmproxy.html

[2] https://github.com/mitmproxy/mitmproxy/blob/master/examples/...


Performance is actually the reason I started this project.

A friend of mine was creating a product based on mitmproxy for a client and was running into performance problems. He asked me for advice, and I pointed him to just how little work it was to do exactly what he wanted with openresty instead of mitmproxy.

If you want everything, mitmproxy might be for you. If you want fast, minimal, hackable, and not a pile of Python, then openresty and my approach might be for you.


My problem with using mitmproxy is that it isn't as easy to hack on.

I'm an nginx person, so learning a new Python codebase to hack on features I want is way harder than having something simple I can add on to as needed.


I found it easy to get started; it's as simple as the post describes. The real benefit I see is being able to plug in arbitrary Python code to hack on requests and responses. It's very good, but I've noticed some crashes in my experiments too.


That's the benefit of using openresty: I can print and hack on HTTP requests in an already mature ecosystem.

It's two ways to do the same thing, but I like my way, and mine uses software that is production-ready as a proxy :)
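To make that concrete, here's a minimal openresty sketch of the idea — not the article's exact code; the listen port and backend address are made up, and repeated headers are printed naively:

    server {
        listen 8080;
        location / {
            access_by_lua_block {
                -- dump the request line, headers, and body to the error log
                ngx.req.read_body()
                local lines = { ngx.var.request }
                for k, v in pairs(ngx.req.get_headers()) do
                    lines[#lines + 1] = k .. ": " .. tostring(v)
                end
                lines[#lines + 1] = ngx.req.get_body_data() or ""
                ngx.log(ngx.NOTICE, "\n" .. table.concat(lines, "\n"))
            }
            proxy_pass http://127.0.0.1:3000;
        }
    }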


OpenResty is awesome for lots of things. However, if I'm doing HTTP debugging, I use curl, Charles, ZAP ( https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Proje... ), tcpdump, and Wireshark, in order from the simplest to the most complex problems.
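On the curl end of that spectrum, verbose mode or a byte-level trace already covers a lot of quick HTTP debugging (example.com standing in for the target):

    curl -v http://example.com/
    curl --trace-ascii - http://example.com/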


I'm confused. Mitmproxy shows me request/response bodies and lets me edit and replay requests. Those seem like fundamental features (Fiddler, Burp, and mitmproxy all seem to have them). I don't see how this is done with nginx reverse-proxying and logging; or is that coming in part 2 or 3, maybe?


> They all had good features, but none had all of my desired features.

Many intercepting proxies, like Fiddler with FiddlerScript and Burp Suite through Burp Extender, can be extended to have any feature you want by writing your own code or leveraging someone else's. Personally, the only time I've found myself thinking I might need nginx for a debugging proxy is when I need scale. I'd rather use something that's close enough, write code where I need to, and then focus on doing really cool things with these tools, like finding vulnerabilities for fun and profit.


I've gone through the very basics of Burp Suite before but never effectively used it to test much.

So I did a tutorial search for myself; in case anyone benefits, here's a text one with screenshots: https://www.pentestgeek.com/web-applications/burp-suite-tuto...

And some official video tutorials: https://portswigger.net/burp/tutorials/


If you just want a quick proxy to inspect traffic, Apache with mod_dumpio¹ always seemed the quickest and easiest way to do it: just proxypass your traffic and

     LogLevel dumpio:trace7
     DumpIOInput On
     DumpIOOutput On
and all your traffic is in the log files.

¹http://httpd.apache.org/docs/current/mod/mod_dumpio.html
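For completeness, the proxypass half is the standard mod_proxy pair — assuming mod_proxy and mod_proxy_http are loaded, and with a made-up backend address:

     ProxyPass "/" "http://127.0.0.1:3000/"
     ProxyPassReverse "/" "http://127.0.0.1:3000/"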


I'm all in favour of owning your stack completely, but sometimes you need to be pragmatic about the use case.

I occasionally need to debug traffic. Not often, but occasionally.

For me, the ~$5 for Cellist (http://cellist.patr0n.us/index.html) was a no-brainer.


There's also mitmproxy [0], Charles (which apparently was the inspiration for this post) [1], and Fiddler [2].

0: https://mitmproxy.org

1: https://www.charlesproxy.com/

2: http://www.telerik.com/fiddler


Hey. Regarding HTTPS proxying, I can offer you a better way than creating your own CA and generating certs for every domain, which is too much work and configuration, plus compiling OpenSSL. I have done that already, as a free service at this address: https://ca.parasite.io You can easily integrate it with a Lua module to download certs for any domain, as a Zip, JSON, or pfx. It contains all the files you need: root, intermediate, and target cert, with private keys of course. As the owner/developer, I can say the domain and service are going to keep working for years, at least till 2027 (my root cert's expiry date).

Note: created certs have a 60-minute cache (nginx) to improve performance; you don't want to download a certificate anew for every static file in a single request.


Is this what it looks like? A service asking people to download and install a new root CA certificate?

Don't ever do that.


As the homepage states, it's strictly for developers' use. And maybe I should add a note for others who are not developers not to install the root certificate. Thank you for the reminder.


Fiddler does this, but locally - I should check how long the expiry is, though.


Lots of tools generate CA certs locally. I don't have a problem with that. This is a tool that asks you to download a new root CA cert from a website. That's crazy.


We're in agreement there.


Thanks, this is really neat. I was thinking of something like this, and my only idea was to write my own from scratch. While that might be educational, it was daunting, and I guessed it would have limited support and bugs.



