Rdm
Joined 24 August 2022
A note on Special:RecentChanges
(Dumping this here until I can think of a better place for it, possibly on a different site...)
<hr />
== Recent Changes ==
The reports on tasks not implemented in each language no longer function after the Miraheze migration. A crude workaround is to poll the RecentChanges page frequently and look there for task pages which have not yet been implemented in a given language. But the [[Special:RecentChanges]] link in the site nav on the left has also gone missing. You can still reach it with an extra hop through [[Special:SpecialPages]].
== big O ==
In the context of cache management on a relatively modern cpu design, both the likelihood of a cache miss and the cost of a cache miss depend on the size of the data set.
<pre>   32 K            4 cycles
    ...
 1024 M   44 cycles + 57 ns</pre>
Here, a "cycle" is a clock cycle, and depending on the type of instructions being used, a cpu core may execute 1, 2, 4 or even (in carefully limited contexts) 8 instructions per cycle. (A 3.5GHz clock has a 0.286ns clock cycle.)
In other words, when working with a gigabyte of memory on that machine, a single instruction with a cache miss might cost the time of almost 1100 instructions to (in extreme cases) almost 8800 instructions.
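As a sanity check on that arithmetic, here is a minimal sketch. The 3.5GHz clock and the 1-to-8 instructions-per-cycle range are the figures quoted above; the ~1100-cycle miss cost is inferred from the "almost 1100 instructions" figure in the text, not measured:

```python
# Rough arithmetic behind the "almost 1100 to almost 8800 instructions" claim.
# Assumed figures: 3.5 GHz clock, a hypothetical miss cost of ~1100 cycles,
# and 1 to 8 instructions retired per cycle.

CLOCK_HZ = 3.5e9
CYCLE_NS = 1e9 / CLOCK_HZ          # ~0.286 ns per clock cycle

def instructions_lost(miss_cycles, instructions_per_cycle):
    """Instructions a core could have retired during one cache miss."""
    return miss_cycles * instructions_per_cycle

MISS_CYCLES = 1100                 # hypothetical miss cost, in cycles
print(round(CYCLE_NS, 3))                  # ~0.286
print(instructions_lost(MISS_CYCLES, 1))   # 1100 at 1 instruction/cycle
print(instructions_lost(MISS_CYCLES, 8))   # 8800 at 8 instructions/cycle
```

The spread comes entirely from the instructions-per-cycle factor: the faster the core can retire instructions, the more work a single stalled load forfeits.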
Of course, the above simplifications focus purely on a single cpu architecture.
And there are other issues -- for example, when two different cores are accessing the same memory, that tends to introduce cache management conflicts which slow things down.
And then there are OS issues.
To simplify this further, though: code which accesses memory sequentially tends to have an order of magnitude speed advantage over code which accesses memory randomly.
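A quick way to see the shape of this effect is to sum the same array in sequential and in shuffled index order. This is a sketch, not a benchmark: absolute numbers depend on the cpu and working-set size, and an interpreted runtime like CPython blunts the gap well below the order of magnitude seen in compiled code, since interpreter overhead dominates:

```python
# Sum the same data in sequential vs. random visiting order.
# Same total work either way; only the memory access pattern differs.
import random
import time

N = 1 << 20                        # ~1M elements; adjust to taste
data = list(range(N))

seq_order = list(range(N))         # sequential indices
rand_order = seq_order[:]
random.shuffle(rand_order)         # same indices, random order

def scan(order):
    """Sum data[] in the given visiting order."""
    total = 0
    for i in order:
        total += data[i]
    return total

t0 = time.perf_counter(); seq_sum = scan(seq_order);   t1 = time.perf_counter()
t2 = time.perf_counter(); rand_sum = scan(rand_order); t3 = time.perf_counter()

assert seq_sum == rand_sum         # identical work, different order
print(f"sequential: {t1 - t0:.3f}s  random: {t3 - t2:.3f}s")
```

On typical hardware the random pass is measurably slower; in C or another compiled language the same experiment shows the effect far more starkly.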
Meanwhile, the stuff we do here on rosettacode is pretty much just "dipping our toes in the water". Because of our focus on problems which can be solved by a variety of programmers in a variety of languages, we tend to favor problems which shy away from these performance extremes.