Perhaps a solution would be for the Verilog CPU project to specify a few concrete model numbers for each part (obviously choosing ones that are commonly available and lower cost), so that public contributors can work toward a common implementation for each part.
I mean, the issue is that you're not going to find the specifications for the PHY stuff anywhere. There are no model numbers to clone. It's literally things like: how do you etch out capacitors in the fab's process? What are the electrical properties of their different dopants? There aren't really any models to clone without cloning the whole fab.
Our best option IMO is to wait until Moore's law hits more of a standstill, when fabs become more of a commodity and they're less secretive about the underlying process rules.
We're not talking about achieving on-die chip speeds in the picosecond range here. We're talking about interface specifications between the CPU and the DRAM, USB, etc. -- these can be designed to published specs, and you don't need a trise/tfall equal to 10% of your FO4 inverter delay to achieve them. These are board-level specifications that can be met with mid-tier off-the-shelf components. And if they can't be achieved at the latest DDRx specs, then just design for one generation behind to at least get something going in the community.
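A back-of-the-envelope sketch of the point above: the edge rates a board-level DRAM interface actually needs are far looser than the "10% of FO4" target used for aggressive on-die paths. The FO4 figure and the 1/3-UI rise-time rule below are illustrative rule-of-thumb assumptions, not values from any datasheet; only the DDR3-1600 transfer rate is a spec number.

```python
# Compare an on-die rise-time target against a board-level one.
# All process numbers are assumed rules of thumb for illustration.

FO4_PS = 45.0                       # assumed FO4 inverter delay, ~90 nm process, ps
ON_DIE_TRISE_PS = 0.10 * FO4_PS     # the aggressive on-die target: ~4.5 ps

# DDR3-1600 transfers 1600 MT/s, so one unit interval (UI) is 625 ps.
UI_PS = 1e6 / 1600.0

# A common board-level rule of thumb: keep rise time under ~1/3 of the UI.
board_trise_budget_ps = UI_PS / 3.0

print(f"on-die target rise time : {ON_DIE_TRISE_PS:.1f} ps")
print(f"DDR3-1600 unit interval : {UI_PS:.1f} ps")
print(f"board-level rise budget : {board_trise_budget_ps:.1f} ps")
# The board-level budget comes out roughly 45x looser than the on-die
# target, which is why mid-tier off-the-shelf parts can meet these specs.
```

Under these assumptions the gap only widens if you drop back a DDR generation, which is the fallback suggested above.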
IEEE and IEEE Xplore have been actively going after professors and forcing them to take down links to their own papers. In the past, many professors hosted PDFs of their papers on their websites -- this has only stopped because IEEE has been forcing them off. Just wanted to add this for those who think professors have 'suddenly become stingy' -- they haven't; they were forced to stop.
Interestingly, it's gone the other way in mathematics. It used to be that lots of journals required authors to sign some sort of statement saying that they wouldn't post PDFs on their web pages. (I remember one professor who had a page essentially saying "Here's the PDFs. Sue me, Elsevier!" Unfortunately, I don't remember who it was.) Eventually, I think in response to the massive, overwhelming popularity of the arXiv -- which I think most mathematicians would choose over journals if forced to choose -- the big publishers decided, slowly, piecemeal, and behind the times as always [0], to stop fighting something that was going to happen anyway.
[0] With an exception for the AMS, which has always (as long as I have cared, and checked, anyway) had publication policies that are as author- and reader-friendly as one can probably expect in the real world.
Really? I haven't seen this happening yet. I still self-host copies of my publications. I'm not sure how I (or others in the same position) would respond to a request to remove them from IEEE.
Academia is probably keeping quiet since reputation and career advancement in most of EE, CS, and CE depend heavily on IEEE publications.
There would be complaints somewhere you could link to. I have a collection of over 15,000 papers. Many I got off of IEEE/ACM. However, many others I got by typing the title into Google and/or CiteSeerX to find the same file. I can still usually get any file I think of, and many are still on academics' sites. Those are high-profile, too.
So, where's your evidence that IEEE can force, or is forcing, PDFs off the net?
Totally agree. I have gone through academia and am now in industry; the other day I wanted to browse and read basic papers on IEEE Xplore for background on new topics. Since I am no longer a student, I am willing to pay -- as much as hundreds of dollars for unlimited access. Instead, for hundreds, IEEE Xplore offers a measly ("generous") 25 downloads a month or some such nonsense. It just seemed so miserly of them. We researchers gave them our research to publish essentially for free, and they turn their back on even legitimate paid avenues for us to get the papers. We are willing to pay; but where is the option?
Anyone that has gone through research and academia knows that you often need to browse many papers to even find the ones worth reading. The 25-paper cap includes PDFs opened while browsing; you will easily hit it in a single day.
IEEE and IEEE Xplore can go something themselves. This is coming from a published IEEE author and academic researcher.
"Anyone that has gone through research and academia knows that you often need to browse many papers to even find the ones worth reading."
Exactly. And without the monthly subscription plan (something I wasn't even aware of until this comment thread), the going rate seems to be around $30 per paper--rent-seeking to the point of highway robbery. One thing that these publishing companies could have done long ago was allow some sort of "free preview" of the full text or "full refund within 10 min" option to help deal with this problem. I have no idea how this would be done technically to prevent "pirating" but, as it is, the pay-per-paper system is completely disconnected from the way researchers browse papers.
Have you taken a look at deepdyve.com? $40/month ($30/month if bought as an annual subscription) for unlimited online access to a ton of journals, including a bazillion from IEEE: https://www.deepdyve.com/browse/publishers/ieee
If you create an account but do not subscribe, you can view all the papers but only for 5 minutes each. You can then subscribe to read the ones you need more of, or purchase access to 5 papers for $20.
The above is for online reading. If you need PDFs, you can purchase those but they are not cheap. They are usually what the publisher charges on the publisher's site less a 20% discount.
After the IEEE $25-a-month service came out, I subscribed for three years until I finally canceled it. Honestly, most of the papers I downloaded were junk; the only reason I downloaded them was to verify the listed references.
Declining paper quality plus inflating publication quantity are the main causes of this "Sci-Hub" crisis.
This sounds like an increasingly commented-on issue: there are now hundreds and hundreds of specialty journals, but quality is patchy. In fact, I think we may be seeing inherent problems come to the fore now that journals are more focussed on profits: often there is publication bias towards certain topics; in other cases the bias is towards positive results and breakthroughs, with a lack of enthusiasm for publishing retractions or establishing reproducibility.
One thing that always sticks in my craw is that the raw data is very rarely published for analysis. Many may disagree, but I think that when the raw data for experiments goes unpublished, the temptation to commit academic fraud is very high.
I often wonder whether some of the more recent scandals might have been picked up a lot faster had the data been more readily available.
I also have noticed that many journals don't say who the reviewers are after publication. For instance, I was looking at body psychotherapy the other day and came across something called biodynamic analysis, which appears to be a widely held and well-respected view within the subdiscipline. I was amazed to discover that something called "grounding" is a serious concept underpinning this analysis, so I looked at the Wikipedia references and discovered at least one citation to The Journal of Alternative and Complementary Medicine. The article seems to be attempting to make a link between blood viscosity and electrical grounding of humans to the earth! [1]
Now there is another article, from the same journal, showing there is virtually no impact on the body, so I started to wonder how this passed peer review. The answer is: I have no way of knowing, as they don't make clear what their review policies and procedures are, and they appear to charge authors to publish.
In other words - it's pseudoscience dressed up in credibility. And it is making a serious impact in the world of psychology!
I have found recently that reviewers in natural language processing, in aggregate, place negative value on reproducibility.
Perhaps 1 out of 3 reviewers gets it, another is bored by the fiddly and detailed methods section, and the third was subconsciously hoping for a "brilliant", mystifying secret sauce. "Oh, that's all you did?" he asks. "I could have done that."
Of course you could have done it, I just told you how and pointed you to the data. I also compiled that data, and you won't even let me tell you that because of blind review.
I'm an occultist. One of the first regimens I undertook to learn occultism was grounding and centering. It's very much an esoteric skill, and not something to be cited in a paper... unless it was being discussed under a 'microscope'.
I would enjoy in using fMRI or other diagnostic tools to see what physiologically happens when I do those things. But I have no qualms; that shouldn't be in any academic paper until my recommendation of measuring it is done.
> I would enjoy in using fMRI or other diagnostic tools to see what physiologically happens when I do those things. But I have no qualms; that shouldn't be in any academic paper until my recommendation of measuring it is done.
Oh, so researching an interesting phenomenon that is already cited in journals is somehow "not science"? I'd be careful about letting your own biases affect you negatively.
Simply put, there may or may not be anything there. With diagnostic methods and feedback from the subject, we can start to determine whether there is a measurable effect. If there's something there, we research further. If not, we publish it as evidence of "no noticeable effect". This also goes to show that we (the academic community) should be much more accepting of papers showing no effect, rather than only positive results. Knowing the dead ends that others went down is just as valuable as knowing what works.
But in actuality, I was also giving on-topic discussion about where those phenomena are discussed at length: in studies on occultism. That's just a factual statement with no value judgment attached. Whoever is interested can do their own research with this topic in mind.
What did you mean by "that shouldn't be in any academic paper until my recommendation of measuring it is done"? I think that might be the sticking point.
I was using Google's voice keyboard. I was at a stoplight, spoke it, and submitted.
I'm all for scientific method, be it showing positive, negative, or no results. I also know what isn't currently scientific, although I do have curiosity if some of those 'things' can indeed be proved.
This is not how science works; that is just paying attention to results that back your hypothesis or biases. In fact, doing this is not science at all.
EDIT: An example of this in the wild is how drug companies would cherry-pick research to back the outcome they wanted; cherry-picking is an anti-pattern of true science.
Happens to the best of us. My editing issue is Apple's autocorrect: I get some truly strange results sometimes.
That makes sense - though I hope you don't mind me asking but when you say "I also know what isn't currently scientific", do you mean untested hypotheses?
Hypothesizing then observing or reviewing past observations to verify the hypothesis is a form of experimentation. Every revelation of astronomy has been the result of experiments.
I happen to think occultists are wrong (as in: factually incorrect in their beliefs). But they are entitled to think whatever they want, so that's not a problem. What is a problem is objecting to data being collected and published.
But it seems that his speech-to-text translator wasn't accurate and that's not what he meant.
If you live in a city with a university, like Toronto, check out getting a library card to its library system. It costs me $300/year for unlimited research papers. The only downside is that papers are retrievable in person only, so you kinda need to either be close to the university or keep a list of papers you need before you make the trek out.
You are paying $300/year for access to information that was given to the world freely, and sacrificing your time and effort to get it? Do you see the problem here?
And also all the other resources the university library offers. Especially in summer months, uni libraries are amazing and massively under-used resources. Working/learning in a public library is usually a drag, but uni libraries are awesome.
I used to live a couple blocks from a uni library, and I practically lived there. In the summers I drove straight there from work, got lost in the stacks learning about random things, and only left when it closed.
Fortunately that library was completely free to the public, but if it weren't I would have happily paid $100/mo during the summer months for access. (Probably saved that much just in utilities anyway.)
I recognized what a scam IEEE was my first year of grad school. So many fees, sub-society fees, transaction fees...something themselves indeed. Many times over.
Haha... self-timed domino logic -- published 10-15 years ago already.
This won't work, for many reasons everyone in the industry already knows. High fan-out back to many gates that are spatially distant means you need not only complex routing but also complex sizing of the feedback inverter. So this won't even fit into automatic place-and-route of complex cells. And good luck doing this as a custom design in an efficient way, such as making it a semi-standard-cell library.
Furthermore, the feedback inverter only works in simple cases such as the cascaded gates you show, where the switching of one gate depends on only one fan-out. If it depends on more than one fan-out, the feedback inverter essentially requires another logic gate in front of it, which blows up your circuit size and erases any power savings you were attempting.