Talk:Rosetta Code/Count examples


method of counting header template use may be incorrect

The method of counting header template uses may still give an incorrect count of programming examples, but I think it's as close as we can get without getting unreasonably complex. For instance, we could count the number of opening lang tags, but some examples use pre tags, and still others split one example into sections with explanations. The method used for this task does not account for iterative and recursive solutions for one task, or splits like the one on the String Length page. What we end up with for the total across all tasks is the same as counting the members in each language category and subtracting the language implementations (which seems more complicated). --Mwn3d 19:30, 9 February 2009 (UTC)
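For what it's worth, the header-template counting described above can be sketched as a single regex pass over a page's raw wikitext. This is only an illustration: the regex and function name are mine, and the sample page is invented, but the `{{header|...}}` template syntax is the Rosetta Code convention being counted.

```python
import re

# Count occurrences of the {{header|...}} template inside == ... ==
# section headings, which is (roughly) how the task tallies examples.
HEADER_RE = re.compile(r"==\s*\{\{header\|([^}]+)\}\}\s*==")

def count_examples(wikitext):
    """Return the number of language headers found in a page's wikitext."""
    return len(HEADER_RE.findall(wikitext))

sample = """
=={{header|Ada}}==
<pre>some output</pre>
=={{header|Python}}==
===Iterative===
===Recursive===
"""
print(count_examples(sample))  # counts Ada and Python once each -> 2
```

Note that the Iterative and Recursive subsections are not counted, which is exactly the behaviour (one count per language, however many sub-solutions) described above.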

Count ===(.*)===

We can count every "triple equals" heading, but that will duplicate results. For example:

1. Python
 1.1. A
 1.2. B

will count 3 headings, not 2.

Another solution is to count "triple equals" headings only in articles that have them. But some tasks, like HTTP Request, have only one language with multiple solutions; that approach would count only Erlang.

A more complex solution would be to read all the sections, but that would take a long time and be buggy.
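The overcount described above is easy to demonstrate (a toy sketch; the regex and sample are mine):

```python
import re

# Level-2 wiki headings (== X ==) mark languages; level-3 headings
# (=== X ===) mark sub-solutions, so naively counting every
# triple-equals heading overcounts.
H3 = re.compile(r"^===([^=].*?)===\s*$", re.MULTILINE)

page = """===Python===
===A===
===B===
"""
print(len(H3.findall(page)))  # finds 3 headings for a single language
```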

Traceback for problem with Python version

Traceback (most recent call last):
  File "", line 8, in <module>
    y = urllib.urlopen("" % t)
  File "d:\Python26\lib\", line 87, in urlopen
  File "d:\Python26\lib\", line 178, in open
    fullurl = unwrap(toBytes(fullurl))
  File "d:\Python26\lib\", line 1028, in toBytes
    " contains non-ASCII characters")
UnicodeError: URL u'\u2013Clark_subdivision_s
urface&action=raw' contains non-ASCII characters
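The exact URL in the original script was lost in the paste, but the failure is reproducible with any task title containing the en dash (U+2013), such as Catmull–Clark subdivision surface. In Python 3, percent-encoding the title before building the URL avoids it. A minimal sketch, assuming the standard ?action=raw endpoint:

```python
from urllib.parse import quote

# The task title contains U+2013 (en dash), which old urllib rejects
# when passed through unencoded.
title = "Catmull\u2013Clark_subdivision_surface"

# Percent-encode the title as UTF-8 before building the URL;
# letters, digits and underscores pass through untouched.
url = "https://rosettacode.org/wiki/%s?action=raw" % quote(title, safe="")
print(url)  # the en dash becomes %E2%80%93
```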

cmlimit may not be over 500 for users

Attempts to read more than 500 tasks using the XML query fail with this error. Everybody seems to have used this method (rather than downloading the HTML page for example) so presumably it's not considered to be incorrect. RichardRussell 12:29, 17 November 2012 (UTC)

Nevertheless I have modified the BBC BASIC solution to read the full set of tasks. RichardRussell 12:14, 21 November 2012 (UTC)
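For reference, reading past the 500-item limit means following the API's continuation token rather than raising cmlimit. A rough Python sketch (the api.php path is an assumption for this wiki, and the helper names are mine; the parameter names follow the standard MediaWiki API):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Endpoint path is an assumption; adjust for the wiki being queried.
API = "https://rosettacode.org/mw/api.php"

def fetch(params):
    """Perform one API request and decode the JSON response."""
    with urlopen(API + "?" + urlencode(params)) as resp:
        return json.load(resp)

def all_category_members(category, fetch=fetch):
    """Yield every member title of a category, following the API's
    continuation parameter so the 500-item cmlimit per request
    is not a ceiling on the total."""
    params = {"action": "query", "list": "categorymembers",
              "cmtitle": "Category:" + category,
              "cmlimit": "500", "format": "json"}
    while True:
        data = fetch(dict(params))
        for member in data["query"]["categorymembers"]:
            yield member["title"]
        if "continue" not in data:
            break
        params.update(data["continue"])  # carries the cmcontinue token
```

Passing `fetch` as a parameter also makes the pagination loop testable without hitting the network.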

comparing programming language entries

I think it would be a good idea to show when a program was executed so that outputs could be compared (somewhat) to other entries, even though they may not be from the same time-frame.     -- Gerard Schildberger (talk) 01:26, 21 January 2019 (UTC)

I don't disagree, but that ship sailed nearly 10 years ago. Some of the examples are wildly inaccurate (Ring and Sidef are two obvious examples at this point), but since there is no metric to test against, it is difficult to say whether a particular example is accurate or not. I spent quite a bit of effort making the Perl 6 example as accurate as possible. If you check an example count on my list, and check the example count on the actual page, they match. At least until new task examples get added. --Thundergnat (talk) 23:07, 21 January 2019 (UTC)

latest changes making output not viewable on two browsers

With the latest changes, some of the output isn't viewable on the Firefox and/or Internet Explorer versions that I'm using.     -- Gerard Schildberger (talk) 23:10, 21 January 2019 (UTC)

Sorry about that, entirely my fault. Thanks to SqrtNegInf for fixing it before it dragged on too long. --Thundergnat (talk) 02:29, 22 January 2019 (UTC)

Show which languages don't implement a task

I think the output of this task is really useful for browsing the site - it's what I use most. It's an easy way to find popular tasks. The linked output shows which of the top ten languages implement each task which is nice but not too useful since the top ten implement most of the tasks. More interesting would be to show the top 10 languages that did not implement the task. Garbanzo (talk) 19:34, 29 December 2020 (UTC)

I'm going to make the assumption that you are talking about the Raku implementation output since, to the best of my knowledge, that is the only one that has any information about which languages have examples for a particular task. I don't disagree, it would be nice to show information for more languages. (Ideally all.) This was just the format that I found most useful when I wrote it; especially since Raku was already in the "top 10" at the time. I had to use some cut-off for how much I could jam in a single table and 10 was a nice round number. I could look into increasing the cut-off threshold to 15 maybe but it is already a pretty heavy page... and then whichever language is at 16 would be left out.
Theoretically it would be possible to do something like what you suggest, but I'm not sure how useful it would actually be. It would be quite challenging to have a unique identifier for the possible 40-50 languages which have the most implementations for many tasks but not for this particular one; and it would pretty much negate the ability to sort by language, since the sorting is all column based. A column would now be one of many possible languages.
I'll fiddle around with it to see if I can come up with something useful, but I wouldn't hold my breath. If there was some way to hook into the database directly so I didn't need to do so many page requests it would be more feasible. --Thundergnat (talk) 22:07, 29 December 2020 (UTC)
Well, it's not great, but at least there is significantly more coverage. Now generating reports for the top 40 languages in groups of 10. See Top tier, Second tier, Third tier, Fourth tier. The updated code as it stands can do up to 50 languages, and could easily be modified to do more just by increasing the @places identifier array. I don't much see the point right now though. The 40th language has less than 45% task completion coverage, so a task picked at random from the Tasks page will likely be one it has not implemented. --Thundergnat (talk) 17:23, 31 December 2020 (UTC)
That's a big improvement, should be very useful, thanks. --Tigerofdarkness (talk) 20:12, 31 December 2020 (UTC)
Agreed. I think it would be useful to link to these from the front page Garbanzo (talk) 08:10, 3 January 2021 (UTC)
Again, this output is very useful and would be nice if it were visible from the front page. One thing it is missing is joint entries. Several tasks have common C/C++ entries (e.g. Include_a_file#C_.2F_C.2B.2B) but the entry is only counted for C. Those entries are counted from the C++ Category page. Most tasks should (and do) have separate entries, but a few of the simple ones would have identical entries. Garbanzo (talk) 01:09, 12 July 2021 (UTC)
One more thing that would be nice (but maybe a lot of work) would be to distinguish between unimplemented tasks and tasks that have been omitted. Garbanzo (talk) 01:12, 12 July 2021 (UTC)
Sounds good. May be interesting / challenging to get that right. Knock yourself out. Let us know how your work progresses. --Thundergnat (talk) 10:17, 13 July 2021 (UTC)
I think I got it working, and it was interesting. It turned out that figuring out omitted tasks was easier than splitting C/C++, which was the opposite of what I was expecting. There is a test run here: User:Garbanzo/TaskCountOutput. I used ASCII 'O' to mark the omitted tasks, but maybe another symbol would look better? This is my first dive into Raku, so it may not be the most idiomatic way. It's a fun language. Garbanzo (talk) 17:05, 17 July 2021 (UTC)
Ha! Kudos! I was only semi serious when I suggested you implement it. I've got a bunch of RL stuff going on right now that is severely limiting my Rosettacode time. Awesome that you stepped up and did it. (And I got you to mess around with Raku too. Mwaa ha ha ha. My evil plan comes together. 😈) O is probably as good an indicator as anything else. I'll try running my weekly reports with the update. --Thundergnat (talk) 19:35, 18 July 2021 (UTC)

Problem running the C# entry

I tried running the C# entry under .NET Framework 4.8.1 and Windows 10 and got the error message: Unhandled Exception: System.Net.WebException: The request was aborted: Could not create SSL/TLS secure channel.
Posts on Stack Overflow suggested adding ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12; as the first thing in the Main method, which fixed the problem; however, I haven't put this in the actual source as I'm not sure it is the "proper" solution. --Tigerofdarkness (talk) 11:17, 11 September 2022 (UTC)

Encoding of '+' in task names such as 'A+B'

I noticed today, whilst engaged in the tedious business of updating all Wren solutions to use Wren syntax highlighting, that encoding the '+' sign as %2B in a task name such as 'A+B' no longer works - at least in Firefox and Chrome. It encodes it as 'A_B' instead. The only way I could get it to work was to use the double encoding %252B. Most other solutions are probably using a built-in method for this, but it may be worth checking that they still work. PureFox (talk) 11:40, 3 February 2024 (UTC)
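For comparison, here is what the single and double encodings look like from Python (a hypothetical sketch; the on-wiki solutions use whatever URL-escaping facility their own language provides):

```python
from urllib.parse import quote

# quote() percent-encodes '+'; safe="" just makes the intent explicit.
title = "A+B"
single = quote(title, safe="")
double = quote(single, safe="")  # '%' itself becomes %25
print(single)  # A%2BB
print(double)  # A%252BB -- the double encoding described above
```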