[OpenSCAD] Can you give me any rendering advice...?
Dan Zuras 3D
threedee at nonabelian.com
Sun Dec 26 18:50:41 CET 2010
> Subject: Re: [OpenSCAD] Can you give me any rendering advice...?
> From: Marius Kintel <marius at kintel.net>
> Date: Sun, 26 Dec 2010 18:08:17 +0100
> Cc: openscad at rocklinux.org
> To: Dan Zuras 3D <threedee at nonabelian.com>
> On Dec 26, 2010, at 17:05 PM, Dan Zuras 3D wrote:
> > Thank you for looking into the problem.
> > How dramatically?
> Very dramatically. Think everything freezes for minutes while rendering,
> without progress output.
This is consistent with my observations
of the problem. Perhaps one can profile
the CGAL behavior & see where it spends
an inordinate amount of time when this
crops up. Just a suggestion.
> >> measure where we don't render this preview if the number of objects
> >> after CSG normalization exceeds 1000. This could be made configurable
> >> in
> > Will this mitigate the problem or merely
> > cause OpenSCAD to crap out when it happens?
> It would cause a freeze while rendering. This freeze could be remedied
> by rendering off-screen while keeping a progress bar around.
Doesn't sound like much of an improvement.
Does it have some diagnostic value perhaps?
> >> Also, the CGAL library used for geometric evaluation (F6) is extremely
> >> slow. This is a long-standing issue which I'm hoping to be able to
> > I believe I (& others) have run into variations
> > of this issue before. While I also have no idea
> > how it happens I am well aware that it does.
> As you correctly pointed out, this happens because arbitrary precision
> floats are being used, which is just over the top computationally expensive.
Also over the top memory expensive.
> This is combined with CGAL wanting to have a so-called 3D Nef polyhedron
> around for CSG operations to be available, which means we have to
> evaluate everything to a single resulting polyhedron, which ends up
> gobbling up even more CPU time as well as memory.
> As you also pointed out, there might very well be memory leaks in there
> somewhere, or rather temporary accumulation of memory in some evaluation
> loop. Combined with the performance issues, these things are a bit hard
> to catch and haven't been a priority. Again, the latest refactoring job
> will make it possible to run a lot more of this functionality in
> isolation, which gives some hope.
I have a possible clue for this as well.
I run on Ubuntu 10.04 on a 2 CPU system
with 4GB of memory. And I routinely run
the system monitor when I render things
to keep track of CPU & memory usage.
When the problem crops up & it dies or
I must kill the job & try something else,
I notice that a significant amount of
the leaked memory is left over. That
is, Ubuntu has lost track of it as well.
I can recover it with a reboot but it
suggests that at least part of the
problem lies within the kernel & how it
manages funny malloc & realloc calls.
So, in this case at least, the problem
may lie as much with the kernel as with
the rendering code that caused it.
BTW, I believe that the nature of the
ill-treated memory calls is pointers
that are bad enough to cause writes on
the page link data. This would cause
both CGAL & the kernel to drop those
pages from their free lists. Bad all
around.
> CGAL is, in my experience, not stable when using normal or double
> precision floating point numbers, so we'd have to look elsewhere for CSG
I believe this instability may be due to
some ill-formed interpretation of the
issue of when two points are the same
point. As we are rendering images in
fields with merely thousands of pixels
in them, I think a selectable epsilon
that determines when 2 points are to be
considered coincident would fix the
problem.
That is, one might have a (user changeable)
parameter such that if |x - y| < epsilon
then x & y are to be considered as rendered
to the same point.
One need not change the values of x & y
to some common value like (x + y)/2, but
just regard all points within that epsilon
as being rendered to the same point.
If one were to set the value of epsilon
to something on the order of a pixel size,
many of the traditional rastering effects
would start to happen. But if one were to
set it to something on the order of 1/100th
or 1/1000th of a pixel, I suspect everything
would be OK.
And at ~1000 pixels on a screen & ~1/1000th
of a pixel for the epsilon, something like
~20 bits are needed. This is far exceeded
by the 53 bits of precision that a double
precision number carries. Even in the most
extraordinary chains of roundoff error, two
originally identical points will still be
within such an epsilon when rendered. And
two points distinct enough to see will be
easily outside that epsilon.
It has been years since I wrote any graphics
code of my own & this approach may be naive
by modern standards but it seems a simple
enough cure to me.
Even if, perhaps, it is greatly different
from what is currently being done.
> Unfortunately, in the world of geometric evaluation of CSG in 3D, this
> appears to be a common issue, and I'd appreciate pointers to and
> experience with other libraries which could help improving this. As an
> intermediate resort, I've been thinking about using or implementing a
> faster algorithm, and rather fall back to CGAL whenever that one fails.
> It would naturally have to fail in a predictable and detectable fashion.
An algorithm that makes far less use of its
memory resources would automatically get
faster even if each operation were somewhat
more expensive. It is a reasonable place
to start.