Speaking of hidden Python tools, I'm a big fan of re.Scanner[0]. It's a regex-based tokenizer[1] in the `re` module which, for whatever reason, is completely missing from the official documentation.
You give it a pattern for each token type, and a function to be called on each match, and you get back a list of processed tokens.
Importantly, it processes the input in one pass and ensures the matches are contiguous, whereas a naive `re.findall` with capture groups will silently skip unmatched characters. You also get a reference to the running scanner, so you can record the location of each match for error reporting.
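For anyone curious, a minimal sketch of the API (the token names and callbacks here are my own illustration, not from any docs):

    import re

    # Each lexicon entry pairs a pattern with an action: a callback that
    # receives the running scanner and the matched text, or None to skip
    # the match entirely. scanner.match holds the current match object,
    # so you can record positions for error reporting.
    scanner = re.Scanner([
        (r"[0-9]+",  lambda scanner, tok: ("INT", int(tok))),
        (r"[a-z_]+", lambda scanner, tok: ("IDENT", tok)),
        (r"[,.]",    lambda scanner, tok: ("PUNCT", tok)),
        (r"\s+",     None),  # skip whitespace
    ])

    tokens, remainder = scanner.scan("45 pigeons, 23 cows, 11 spiders")
    # tokens    -> [('INT', 45), ('IDENT', 'pigeons'), ('PUNCT', ','), ...]
    # remainder -> '' only if the entire input was consumed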
This is quite an oversimplification. Some modules have no active maintainers and take time from the small team that works on CPython. Some modules are deprecated and removed to free up time for other improvements, and the core developers went back on some removals when users came forward showing that the modules were still needed.
> They regret including most modules… it seems they regret making python altogether instead of sticking with C? :D
I also find it odd. Python would probably be a little-known language without the huge batteries included by default. It's invaluable when you're working in an environment where you can't fully control what is installed, which is the case for most people at work. I believe this crusade endangers the language long-term.
You're reading a ridiculous mischaracterization of reality (~20 deprecated modules [1] out of the ~300 PSL modules [2] is "most"?). Of course you find it odd.
It's the trend that is a bit worrying. It's definitely been floated more than once that the stdlib should be fundamentally gutted. It has not become a reality yet, but there are some loud voices advocating chucking most modules onto PyPI as a matter of principle.
(1) Not letting little used modules that pose maintenance problems drive up the cost of maintaining Python, and
(2) Not forcing actively used, actively developed modules to be limited to the core language upgrade cadence (and not forcing users to upgrade the language to get upgrades to the modules.)
Ruby I think has a decent approach to this (particularly, one that deals with #2 better than just evicting things entirely from the standard distribution) with “Gemification” of the standard library, where most things that are moved out of the standard library aren’t moved out of the standard distribution, but into a set of gems distributed with the standard distribution but which can be upgraded independently.
Maybe I've been out of the loop for the past couple of years since I've been writing less Python and don't follow Twitter drama, but IIRC none of the "stdlib is where modules go to die" crowd has ever advocated a fundamental gutting of the existing stable and widely used modules in the PSL.
"Had possibly been added before it was fully baked" is not the same as "we should never have added anything like it", and neither is "we have a much better solution but no one uses that because asyncio is in stdlib which we're now stuck with, for better or worse".
You'll be able to do this soon with the Rust regex crate as well. Well, by using one of its dependencies. Once regex 1.9 is out, you'll be able to do this with regex-automata:
    use regex_automata::{
        meta::Regex,
        util::iter::Searcher,
        Anchored, Input,
    };

    #[derive(Clone, Copy, Debug)]
    enum Token {
        Integer,
        Identifier,
        Punctuation,
    }

    fn main() {
        let re = Regex::new_many(&[
            r"[0-9]+",
            r"[a-z_]+",
            r"[,.]+",
            r"\s+",
        ]).unwrap();
        let hay = "45 pigeons, 23 cows, 11 spiders";
        let input = Input::new(hay).anchored(Anchored::Yes);
        let mut it = Searcher::new(input).into_matches_iter(|input| {
            Ok(re.search(input))
        }).infallible();
        for m in &mut it {
            let token = match m.pattern().as_usize() {
                0 => Token::Integer,
                1 => Token::Identifier,
                2 => Token::Punctuation,
                3 => continue,
                pid => unreachable!("unrecognized pattern ID: {:?}", pid),
            };
            println!("{:?}: {:?}", token, &hay[m.range()]);
        }
        let remainder = &hay[it.input().get_span()];
        if !remainder.is_empty() {
            println!("did not consume entire haystack");
        }
    }
A bit more verbose than the Python, but the library is exposing much lower-level components. You have to do a little more stitching to get the `Scanner` behavior. But it does everything the Python does: a single scan (using finite automata, not backtracking like Python), skipping certain token types, and guaranteeing that the entirety of the haystack is consumed.
Yes, as I said, the APIs exposed in regex-automata give a lot more power. It's an "expert" level crate. You could pretty easily build a scanner-like abstraction and get pretty close to the Python code.
I posted this because a lot of regex engines don't support this type of use case. Or don't support it well without having to give something up.
Interesting! Not often using either crate, this example looks like something for which I might usually look to nom. Is there a reason I should consider using regex for this use case instead (if neither is a pre-existing dependency)?
I don't use nom. I've tried using parser combinator libraries in the past but generally don't like them.
That said, I don't usually use regexes for this either. Instead, I just do things by hand.
So I'm probably not the right person to answer your question unfortunately. I just know that more than one person has asked for this style of use case to be supported in the regex crate. :-)
> completely missing from any official documentation
To be fair, most things are missing from the official documentation. When I learned Kotlin, I read through their official docs and knew about most language features in a day. When I learned Python, I constantly got surprised by things I hadn't seen come up in the docs. For instance, decorators were (still are?) not mentioned at all in the official tutorial.
The tutorial is not supposed to cover all language features: "This tutorial does not attempt to be comprehensive and cover every single feature, or even every commonly used feature. Instead, it introduces many of Python’s most noteworthy features, and will give you a good idea of the language’s flavor and style."
But then I don't know how you're supposed to learn the features that are not in the tutorial. You can have a look at the table of contents of the standard library documentation for modules that might interest you, but that doesn't cover language features. Those are documented in The Python Language Reference, but that document is not really suited for learning from.
There are lots of websites and Youtube channels and so on, but you have to find them, and filter out the not-so-good ones which is not easy, especially for a beginner. I think there is room for some kind of official advanced tutorial to cover the gap.
I agree. I wish Python were better with the documentation. It's a bit absurd that things feel more clear and simple reading Rust documentation than Python documentation for me, given that Python is actually a lot more simple and clear (for me).
Woah, that's a pretty cool feature! I always feel a bit dirty trying to do anything like that manually (usually involving a string.split(",")[0][:2] etc., just asking to break).
Curious cat - had you considered using ANTLR4 and a Python visitor/listener? (Were you aware of ANTLR?) Depending on what you're trying to do with a regex tokenizer, it might be suitable.
It also runs the file as a module, not a script, which means relative imports suddenly work: the root dir and the cwd are the same, and it is added to sys.path.
This prevents a ton of import problems, albeit at the price of more verbose typing, especially since you don't get completion on the dotted path.
It is my favorite way of running my projects.
Unfortunately it means you can't use "-m pdb", and that's a big loss.
Not just relative imports but also (properly formed) absolute imports. For example, if you have a directory my_pkg/ with files mod1.py and mod2.py, then
    # In my_pkg/mod2.py
    import my_pkg.mod1
will work if you run `python -m my_pkg.mod2` but will fail if you run `python my_pkg/mod2.py`
However, the script syntax does work properly with absolute paths if you set the environment variable `PYTHONPATH=.` (I don't know about relative paths - I don't use those). That would presumably allow pdb to work (but, shame on me, I've never tried it).
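Concretely, something like this (launched from the project root):

    PYTHONPATH=. python my_pkg/mod2.py    # absolute imports of my_pkg now resolve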
I also develop and run projects this way. I really, really enjoy it. It's a very pleasant experience, on both the development side and execution side.
I'm relatively new to Python (used it for ~1 year in 2007/2008, again briefly in 2014 -- which is when I believe I picked this module trick up -- and then didn't touch it again until March of this year). It's made an impression on my team and we're all having a good time developing code this way. I do wonder, though, what other shortcomings might exist with this approach.
There's no magic, only layers. `python -m pdb <args>` runs pdb with the rest of the arguments. pdb handles the second `-m`.
If you have a fancy IDE feature, open a new python file, type "import pdb", use go to definition on pdb to jump to that file in the standard library, and read its main function - it handles -m explicitly :)
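Which means that, since Python 3.7, you can in fact chain them - for example (module name reused from upthread):

    python -m pdb -m my_pkg.mod2    # pdb forwards the second -m and debugs the module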
Speaking of pdb.. maybe someone knows why pdb has some issues with scope in its REPL that are resolved in VSCode's debugger and PyCharm?
Multiline statements are not accepted, nor things like if/for
Even list comprehensions and lambda expressions have trouble loading local variables defined via the REPL
Are there workarounds? It would reduce the need for using IDEs. People who have experience with Julia and Matlab are very used to a trial and error programming style in a console and bare python does not address this need
Not supporting multi-line statements is just because pdb doesn't bother to parse the statement to work out if it is an incomplete multi-line statement. That could be easily fixed (I have a prototype patch for that using `code.compile_command`).
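For the curious, the primitive in question (a standalone illustration, not the patch itself):

    import code

    # compile_command returns None while the source is incomplete, a code
    # object once it is complete, and raises on an actual syntax error -
    # exactly what a REPL needs to decide whether to keep reading lines.
    print(code.compile_command("for i in range(3):"))   # None -> incomplete
    print(code.compile_command("x = 1"))                # <code object ...>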
"The scope of names defined in a class block is limited to the class block; it does not extend to the code blocks of methods - this includes comprehensions and generator expressions since they are implemented using a function scope."
This is a fundamental limitation of `exec`. You can work around it by only passing a single namespace dictionary to exec instead of passing separate globals and locals, which is what pdb's interact command does[2], but then it's ambiguous how to map changes to that dictionary back to the separate globals and locals dictionaries (pdb's interact command just discards any changes you make to the namespace). This too could be solved, but it requires either brittle AST parsing or probably a PEP to add new functionality to exec. I'll file a bug against Python soon.
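A standalone illustration of that limitation (my own toy example, not pdb's actual code):

    src = "y = 2; print([y for _ in range(3)])"

    # Separate globals and locals: the comprehension gets its own function
    # scope, which looks y up in globals and fails.
    try:
        exec(src, {}, {})
    except NameError as e:
        print("separate namespaces:", e)

    # A single dict serving as both namespaces: y lands in what is also
    # the globals mapping, so the comprehension can see it.
    exec(src, {})    # prints [2, 2, 2]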
After triggering pdb by having "breakpoint()" in your Python code and dropping into the debugger, you can type "interact" in the console to enter multiline statements.
-m pdb gives you post-mortem debugging: it drops you into the debugger when it encounters the first unhandled exception. This is much easier than trying to pinpoint where to put a breakpoint.
I've always been curious how that mechanism works - what exactly is it about that invocation technique that satisfies the relative imports? I think it changes the pythonpath somehow, right? In a way related to the module being run, something like appending the basedir where the module is saved to the PYTHONPATH?
Python 3.12 will include a SQLite CLI/REPL in the standard library too[0][1]. This is useful because most operating systems have sqlite3 and python3, but are missing the SQLite CLI.
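Going by the linked change, usage is as you'd expect (the file name here is made up):

    python3.12 -m sqlite3           # REPL on a transient in-memory database
    python3.12 -m sqlite3 app.db    # open (or create) app.db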
Slightly related, Emacs also includes an sqlite client/viewer now. I find it funny to see everybody chasing the same need, and unsurprising, since it's always good to have sqlite close to you.
Lots of code in the stdlib does not have type annotations. Though I think most popular modules in the stdlib either are annotated or have stubs somewhere.
I know, but I expected that to be restricted only to older modules. I don't expect them to annotate all existing modules.
I don't see why they would introduce _new_ code without annotating it, when that's clearly the trend for 3rd party libraries. From a quick look, it doesn't seem like it would be difficult to type either.
The standard library has type hints in the "typeshed" GitHub repo. Please do not submit PRs to CPython to add type hints (I made this error before too :))
Not sure why my comment is being downvoted. From the sqlite.org website:
> The SQLite project provides a simple command-line program named sqlite3 (or sqlite3.exe on Windows) that allows the user to manually enter and execute SQL statements against an SQLite database or against a ZIP archive.
I use http.server all the time, particularly as modern browsers disable a bunch of functionality if you open file URLs. Had no idea there was so much other stuff here!
Wait, how does that work? As far as I can see from the documentation, it can only serve on localhost, which to my understanding is only accessible from the single device it was launched on.
If you serve on localhost you can usually access it from other devices by using the server's IP address. So if the desktop where you're running the server has IP 192.168.1.10, then you can go to http://192.168.1.10:8000 (with the default port 8000) in the browser of another device on the same network.
But `localhost` is also an alias specifically for the loopback address (typically `127.0.0.1`), so "serve on localhost" can reasonably be interpreted as "serve on 127.0.0.1", which will only be available to other programs on that host, and not to other devices on the local network.
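For what it's worth, http.server lets you pick either behavior explicitly with `--bind` (the port and address below are just examples):

    python -m http.server 8000 --bind 127.0.0.1   # loopback only: this machine
    python -m http.server 8000                    # default: all interfaces, visible on the LAN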
Also if your host and client devices both support mDNS / Bonjour, you don’t even need to type the IP address.
For example, if your Ubuntu Linux desktop machine has hostname foobar, and you run an http server on, say, port 8000, then you can use your iPhone and other mDNS / Bonjour capable things to open http://foobar.local:8000
And likewise, say you have a macOS laptop with hostname "amogus" and an http server listening on, say, port 8081 - you can navigate on other mDNS / Bonjour capable devices and machines to http://amogus.local:8081
Have you checked the full docs? Maybe it takes an optional parameter to specify the server machine's IP address or host name. Then others on the network could see it.
It may be a fantastic, well loved language that's exploding in popularity and the source of endless very high quality CLI tools... but. The absolute cheek!
I find the entire premise of the post to be pretty baffling.
> Seth pointed out this is useful if you are on Windows and don't have the gzip utility installed.
Okay, so instead of installing gzip (or just using the decompressors that aren't the official gzip utility but that do support the format and already ship with Windows by default[1]), you install Python...?
Even if the premise weren't muddy from the start, there is a language runtime and ubiquitous cross-platform API already available on all major desktops that has a really good, industrial-strength sandbox/containerization strategy that more than adequately mitigates the security issues raised here, so you can e.g. rot13 without fear all day to your delight: the browser.
Sorry you're right. But you are being pedantic. These are quick hacks that might come in handy a couple times per year, maybe. Bringing up security, alternative native tools, and even trying to find a formal definition for "most people" is, imo, missing the point.
IIRC this is one of the things earmarked for a hypothetical Python 4: making -P the default. It's also one of the many relatively well-known (security) issues in Python that didn't get addressed for a surprising amount of time. Others in the same vein would be stuff like stderr being block-buffered when not using a TTY, no randomized hashes for the longest time, loading DLLs preferentially from the working directory, std* using 7-bit ASCII in a number of circumstances, and many more.
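For reference, the flag in question, added in Python 3.11 (PYTHONSAFEPATH is the equivalent environment variable):

    python -P script.py    # don't prepend the script's directory (or the cwd) to sys.path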
As opposed to line buffered, I assume? That sounds annoying, but why is it a security problem?
> no randomized hashes
I'm not up to date, but I think last I looked, I had the impression that randomized hashes didn't seem like they would fundamentally prevent collision attacks, just require more sophistication. Is that not the case?
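For context, what did eventually land is per-process str/bytes hash randomization, which you can observe (or pin) via PYTHONHASHSEED:

    python -c "print(hash('a'))"                    # differs from run to run
    PYTHONHASHSEED=0 python -c "print(hash('a'))"   # seed 0 disables randomization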
No, the problem was that python[.exe] would load pythonXY.dll etc. from the working directory instead of the installation.
Edit: I also recall issues with wheels where .so's in unexpected locations would take preference over the .so files shipped by the wheel. I believe most of that should be fixed nowadays with auditwheel and hardcoded rpaths.
Doing that requires placement of files right in the directory where the user is likely to run that module.
Seems to be a quite rare vector for exploitation.
Sure, on a multiuser system I might trick some other user into running such a command in /tmp and prepare that directory accordingly, but other vectors seem more esoteric.
Even in Google's own repos. Starting any of those (no matter where they are stored) in a hostile repo would let the code in the repo take over the machine.
Only if running with -m, or with -c and an import of a package matching the name of the malicious package. It doesn't happen when running your own script (located in another directory) that imports that package, even if you are running it in that directory.
I often use `python -m http.server` e.g. to easily share files over local networks but I had no idea so many standard modules supported this. Thanks for sharing this link!
One of these days it would be nice to make an unofficial Python reference book which documents these tools, hidden features (like re.Scanner!), and other corners of the stdlib or language.
An extra tool, but you don't really have to learn it, and if you're "always dumping JSON" and also use the command line a lot, you probably want to have it around anyway.
The current version and for quite some time, then.
If I said 'Windows comes with the Edge browser' would you say 'I use two at work, one does but the other only has Internet Explorer'? Surely it's generally implied we're talking about things as they are, unless specified otherwise?
Shame gzip has one but zlib does not; that would be a very useful addition: some software creates raw zlib streams on disk (e.g. git) and there's no standard decompressor - you need to either prepend a fake gzip header, go through openssl, qpdf's zlib-flate, or pigz -z.
After looking into it, turns out gzip is a python module while zlib is a native (C) module. And I can find no hook to support `-m` with native modules.
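In the meantime, a one-liner gets you a raw-zlib decompressor on stdin/stdout (the file name here is illustrative):

    python -c "import sys, zlib; sys.stdout.buffer.write(zlib.decompress(sys.stdin.buffer.read()))" < blob.z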
> I thought this might provide a utility for generating random numbers, but sadly it's just a benchmarking suite with no additional command-line options:
I had the same experience with the tkinter ones - I thought they might be like zenity, a way to build simple UI elements from the command line. But they mostly just show simple non-configurable test widgets. The colour chooser could be helpful though.
If by "on a whim" you mean removal after 6 years of being warned against (the first "maybe use setuptools instead" note was in Python 2.7.12 in 2016): deprecation was proposed in October 2020, agreed in January 2021, and removal will happen in Python 3.12, which... hasn't been released yet.
[0]: https://stackoverflow.com/a/693818/252218
[1]: https://en.wikipedia.org/wiki/Lexical_analysis#Tokenization