Talk:Heronian triangles: Difference between revisions

(Perhaps worth dropping the 'product' import from itertools in the Python version ?)
(Dropping the product function and the a <= b <= c test (letting the generator component of the comprehension do that work))
::::Thanks – that was fast.
::::On the topic of imports, I wonder if it might make good pedagogic (and perhaps engineering) sense to drop the import of ''product'' from ''itertools'' and let the list comprehension do the generation of the Cartesian product itself?
::::The fact that list monads and list comprehensions yield Cartesian products unassisted is one of their most interesting (and arguably central) properties, and perhaps we can demonstrate that more clearly by rewriting the first (generating) half of that comprehension as '''h = [(a, b, c) for a in range(1, last) for b in range(a, last) for c in range(b, last)'''
:::: (where ''last'' is maxside + 1)
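::::A minimal, self-contained sketch of the two forms (with a much smaller ''maxside'' than the task's 200, purely for illustration) shows that they generate exactly the same triples:

```python
from itertools import product

maxside = 20      # the task uses 200; reduced here purely for illustration
last = maxside + 1

# Current form: generate the full Cartesian product, then filter
full = [(a, b, c)
        for (a, b, c) in product(range(1, last), repeat=3)
        if a <= b <= c]

# Suggested form: the nested for-clauses generate only non-decreasing
# triples, so the oversized product set is never materialised
h = [(a, b, c)
     for a in range(1, last)
     for b in range(a, last)
     for c in range(b, last)]

assert h == full  # same triples, in the same (lexicographic) order
```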
::::'''Advantages''':
::::# The filtering happens earlier. Rather than first generating 8 million candidate tuples and only then starting to filter, we begin filtering immediately in the inner ''for'' clause (or inner call to ''concatMap'') of the process that generates the Cartesian product, so the full oversized set is never created in the first place.
::::# By defining the range of ''b'' in terms of ''a'', and that of ''c'' in terms of ''b'', we immediately eliminate the 6,646,600 out of 8,000,000 cases which otherwise have to be filtered out by the ''if (a <= b <= c)'' condition, and that condition can now be dropped.
::::# Apart from the probable space improvement, there also seems to be a time improvement in the range of 50% (at least on this system with Python 2.7).
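::::As a quick sanity check on the figure in point 2 (a sketch, using ''maxside'' = 200 as in the task): the nested ranges generate only the non-decreasing triples, whose count is the multiset coefficient ''C(maxside + 2, 3)'', and the difference from the full cube is the number of cases the dropped condition would have discarded.

```python
maxside = 200
total = maxside ** 3  # 8,000,000 raw (a, b, c) triples

# Non-decreasing triples from 1..maxside: the multiset coefficient
# C(maxside + 2, 3) = maxside * (maxside + 1) * (maxside + 2) / 6
ordered = maxside * (maxside + 1) * (maxside + 2) // 6

print(total - ordered)  # 6646600 cases the `if a <= b <= c` test had to discard
```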
::::[[User:Hout|Hout]] ([[User talk:Hout|talk]]) 10:11, 25 October 2015 (UTC)