My attempt would look something like this:

Code:
```
indices = np.arange(3)
oboe = (indices + 1) % 3   # "off-by-one" index of the next vertex, mod 3
# tv holds the triangle's three vertices.
# First get surface normal and edge lengths! :)
# e = tv[oboe] - tv[indices]  # <- cool way to do it
e = np.array([tv[oboe[i]] - tv[i] for i in indices])  # <- boring way
# L1 = np.linalg.norm(e[0]), etc., all at once:
L = np.apply_along_axis(np.linalg.norm, -1, e)
snorm = np.cross(e[1], e[0])
snorm = snorm / np.linalg.norm(snorm)
Linc = 1 / L
Lspan = np.ceil(L)
Lspaces = [np.linspace(0, 1, 1 + int(Lspan[i])) for i in indices]
bary_coords = np.zeros(3)
for i, j in np.vstack((indices, oboe)).transpose():
    for ca in Lspaces[i]:
        for cb in Lspaces[j]:
            if ca + cb > 1:
                break
            bary_coords.fill(1 - ca - cb)
            bary_coords[i] = ca
            bary_coords[j] = cb
            # here we will need to have calculated matrices
            # to take us from barycentric coords to 3-space.
            # castraythroughvoxels must take two args, src and dir,
            # and moves from src to src+dir
            castraythroughvoxels(make_cartesian_coords(bary_coords), snorm)
```

if we're concerned about coverage, we can change it thusly:

Code:
```
indices = np.arange(3)
oboe = (indices + 1) % 3   # "off-by-one" index of the next vertex, mod 3
# tv holds the triangle's three vertices.
# First get surface normal and edge lengths! :)
# e = tv[oboe] - tv[indices]  # <- cool way to do it
e = np.array([tv[oboe[i]] - tv[i] for i in indices])  # <- boring way
# L1 = np.linalg.norm(e[0]), etc., all at once:
L = np.apply_along_axis(np.linalg.norm, -1, e)
snorm = np.cross(e[1], e[0])
snorm = snorm / np.linalg.norm(snorm)
Linc = 1 / L
Lspan = np.ceil(L)
Lspaces = [np.linspace(0, 1, 1 + int(Lspan[i])) for i in indices]
bary_coords = np.zeros(3)
for i, j in np.vstack((indices, oboe)).transpose():
    for ca in Lspaces[i]:
        for cb in Lspaces[j]:
            if ca + cb > 1:
                break
            bary_coords.fill(1 - ca - cb)
            bary_coords[i] = ca
            bary_coords[j] = cb
            # here we will need to have calculated matrices
            # to take us from barycentric coords to 3-space.
            # castraythroughvoxels must take two args, src and dir,
            # and moves from src to src+dir
            castraythroughvoxels(make_cartesian_coords(bary_coords), snorm)
    # Second pass: sweep the same edge pair from the other side
    # to tighten coverage near the far edge.
    for ca in Lspaces[j][len(Lspaces[j]) // 2:]:
        for cb in Lspaces[i]:
            if ca + cb > 1:
                break
            bary_coords.fill(1 - ca - cb)
            bary_coords[j] = ca
            bary_coords[i] = cb
            castraythroughvoxels(make_cartesian_coords(bary_coords), snorm)
```

But I don't have a good enough intuition for barycentric coordinates to know which of those we prefer without drawing it out better.
It seemed to me (intuitively) that if we iterate over each edge paired with each other edge, drawing perpendicular rays from each edge, then finding the intersection of the two rays, we would cover every point in the triangle (voxel-wise) at least once. Of course, many of those points would lie outside the triangle, but the barycentric coordinates can easily tell us that.
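That inside-the-triangle test is exactly what barycentric coordinates make cheap. A minimal sketch (Python 3 with numpy; `inside_triangle` is a hypothetical helper name, not from the code above): a point is inside or on the triangle iff all three coordinates are non-negative and they sum to 1.

```python
import numpy as np

def inside_triangle(bary, eps=1e-9):
    """True iff the barycentric coordinates describe a point
    inside (or on the boundary of) the triangle."""
    bary = np.asarray(bary, dtype=float)
    return bool(np.all(bary >= -eps) and abs(bary.sum() - 1.0) <= eps)

print(inside_triangle([0.2, 0.3, 0.5]))    # interior point
print(inside_triangle([1.2, -0.1, -0.1]))  # outside: negative coordinates
```

So any ray-intersection point that lands outside the triangle can be discarded with two comparisons, no geometry required.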

Edit: I'm inclined to try using the first set of code and seeing what coverage looks like for the fine details of moderately complex models.
This should do the same thing without explicitly having to cast the rays.

Let me know what coverage looks like on a decent model.
I'm implementing the code. `self.geom_makecart()` is simply `return tri[0]*bary[0] + tri[1]*bary[1] + tri[2]*bary[2]`, while for `def castraythroughvoxels(self, origin, direction, radius=1)` I'm using the 1987 paper referenced here. The code dies for a certain triangle with `./kmz2mc.py:79: RuntimeWarning: invalid value encountered in divide` on `snorm = snorm/np.linalg.norm(snorm)`; at that point, snorm is `[ 0. -0.  0.]`. That means that e[1] and e[0] are parallel or antiparallel, right?
Yes. We should probably do something before that like

Code:
```
if not snorm.any():
    print "Warning: discarding triangle with degenerate normal vector. Are you sure it was really a triangle?"
    return
else:
    snorm = snorm/np.linalg.norm(snorm)
```
My solution was as follows, but I guess it's the same thing. Anyway, this works unbelievably well based on the in-world models (and I'm not even using the voxel-casting properly yet!).

Code:
```
snorm = np.cross(e[1], e[0])
slen  = np.linalg.norm(snorm)
if slen == 0:
    print("Discarding triangle with point normal")
    return
snorm = snorm/slen
```
I'm going to try to implement coloring and texturing before I implement the proper percentage-based voxel filling. Here's a demo screenshot of a view I showed from my previous engine:

Edit: Pushed changes to repo.
I see a distinct lack of holes in that model. Want to run it on the skyscraper model as well?
elfprince13 wrote:
I see a distinct lack of holes in that model. Want to run it on the skyscraper model as well?
Same complete success! I remain sad that the spire on the top of this building refuses to generate under any algorithm, though.

Is it complaining about the spire, or failing silently? What does the geometry look like if you load it in SketchUp?
elfprince13 wrote:
Is it complaining about the spire, or failing silently? What does the geometry look like if you load it in SketchUp?
I can pull it off; it's almost as if it's a separate model. Let me dig into the model format and see if that's a thing it can do.
Oh, it's probably a separate mesh! I'm quite certain Collada supports that.
elfprince13 wrote:
Oh, it's probably a separate mesh! I'm quite certain Collada supports that.
Yeah, I'm already grabbing information from the 17 different geometries in this particular model (which I just verified by dumping out the geometry information while I converted it), but I must be missing something. I'll edit this post when I figure it out.

Edit: I think the thing that ends up in the basement of the building is the spire; it must have some offset information that I'm missing. Plan of action:

1) Return a list of voxels from the castray...() method
2) Texture everything. Then I can tell what exactly is going on.
3) Based on this, figure out where the spire (and other odd meshes) are going
4) Figure out how to do the volume-based voxel selection.
5) See if it's necessary to re-index points as per PyCollada documentation.
6) Handle poly and line primitives. Way to segment polys into tris?
Quote:
Edit: I think the thing that ends up in the basement of the building is the spire; it must have some offset information that I'm missing.

Does your model have named objects by any chance? That can make debugging a lot easier!

Quote:
5) See if it's necessary to re-index points as per PyCollada documentation.

You need to ask the mesh how the triangles are stored. Tri-strips and tri-fans are both reasonably common in some places and would probably need to be reindexed, or you'll get garbage. It's also good to check if the mesh declares a winding order (CW or CCW) and negate your face normals if it's CW.
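For reference, unrolling a tri-strip into an independent triangle list looks something like this (a minimal sketch with a hypothetical helper name, not PyCollada's API; the key wrinkle is that every other triangle in a strip has reversed winding):

```python
def tristrip_to_tris(strip):
    """Unroll a triangle strip into independent (a, b, c) triangles,
    flipping every other triangle so winding stays consistent."""
    tris = []
    for k in range(len(strip) - 2):
        a, b, c = strip[k], strip[k + 1], strip[k + 2]
        # odd-numbered triangles in a strip are wound backwards
        tris.append((a, b, c) if k % 2 == 0 else (b, a, c))
    return tris

print(tristrip_to_tris([0, 1, 2, 3, 4]))
# -> [(0, 1, 2), (2, 1, 3), (2, 3, 4)]
```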

Quote:
6) Handle poly and line primitives. Way to segment polys into tris?

Quads -> tris are super easy (link 0->1->2 and 2->3->0). For everything else, it's probably easiest to convert to a tri-fan first and then reindex.
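The fan trick works for any convex polygon, not just quads. A sketch (hypothetical helper name) that pivots every triangle around vertex 0:

```python
def fan_to_tris(poly):
    """Triangulate a convex polygon as a fan from vertex 0:
    0->1->2, 0->2->3, ... For a quad this is exactly the
    (0,1,2) / (2,3,0) split described above."""
    return [(poly[0], poly[k], poly[k + 1]) for k in range(1, len(poly) - 1)]

print(fan_to_tris([0, 1, 2, 3]))  # quad -> [(0, 1, 2), (0, 2, 3)]
```

Note this only holds for convex polygons; a concave poly needs real triangulation (e.g. ear clipping).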
elfprince13 wrote:
Quote:
Edit: I think the thing that ends up in the basement of the building is the spire; it must have some offset information that I'm missing.

Does your model have named objects by any chance? That can make debugging a lot easier!
It looks like instead of just throwing all of the geometries down, I need to recurse down and down through model.scenes, which contains transforms and geometry references.

Edit: Here's the New York Life Building once more, fully textured and with no holes. I even taught it to understand transparencies in textures, which seems to be working well (and I might make it use the glass block for translucent areas, although I suspect that not many 3D Warehouse models have proper windows). I wanted to use the textures to find the levels of floors, add glass windows, and fill in the floors, but I might have to leave that to you, elfprince13. I'd love any inspiration you can offer on how that might be feasible based on your current Fourier/Eigen exploration.

The first item on my agenda with this tomorrow will be to fix the iteration through the geometry, applying transforms as necessary. The second item will be thinning the surfaces created as necessary.

Edit: I implemented all the fun recursion and everything, and it seems to mostly work. However, when I actually go ahead and convert the model, the spire's transformation matrix is very much not applied properly:

Have you double checked that the matrices are being stored correctly? OpenGL prefers column-major indexing, but most programmers prefer row-major indexing. If the model format is storing it in column-major order to save processing time, and you're loading it in row-major order, bad things will happen.

In general, languages/frameworks designed for matrix math prefer column-major order because columns are vectors, and programmers who don't do linear algebra on a regular basis prefer row-major order because it's how we type it.

If you check that, and it's definitely not the problem, also make sure you're multiplying in the correct order as you proceed downwards. Matrix multiplication is not, in general, commutative, and applying transformations in reversed order is another common mistake.
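Both failure modes are easy to see with a toy translation matrix (a sketch, assuming numpy): transposed storage pushes the translation terms into the wrong slots.

```python
import numpy as np

# Homogeneous transform that translates by (5, 0, 0), row-major layout
T = np.array([[1., 0., 0., 5.],
              [0., 1., 0., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])
v = np.array([1., 2., 3., 1.])  # point as a column vector

print(T @ v)    # correct: the point moves to [6. 2. 3. 1.]
print(T.T @ v)  # column-major data read as row-major:
                # the point doesn't move, and w becomes 6
```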
That's an excellent point. However, I dumped out a few of the transformation matrices, and they looked like this:
Code:
```
Transformed with:
[[  6.71456754e-01   7.41043389e-01   0.00000000e+00  -5.01993225e+02]
 [ -7.41043389e-01   6.71456754e-01   0.00000000e+00   1.37287537e+02]
 [  0.00000000e+00   0.00000000e+00   9.74690020e-01   6.00799463e+03]
 [  0.00000000e+00   0.00000000e+00   0.00000000e+00   1.00000000e+00]]
```
The [0,0,0,1] in the bottom row makes me think this is indeed a proper row-major matrix; it helps that PyCollada takes care of all of the XML -> numpy array conversion.

Edit: It occurs to me that transformation matrices in meters are not transformation matrices in model units (usually inches, sometimes centimeters).
Sadly, that could just as easily be from the opposite convention. In homogeneous coordinates, it's just as valid to have the last column be [[0],[0],[0],[1]] and put the translational coordinates in the bottom row, but such a transposition would muck about with your rotations.

If PyCollada -> numpy stuff is tested library code and not your code, the multiplication order seems like the more likely suspect.

But try both
By the way, here's one instance of where the matrix multiplication issues I was asking about in SAX come in handy. The triangle vertices are stored in an n-row, 3-column matrix, and the transformation matrix is 4x4. I was transposing the vertices, multiplying transform * vertices (in that order), then re-transposing the vertices, like so:

Code:
```
# Apply the transform, if there is one
v = np.copy(triset.vertex)
if xform is not None:
    v.resize((v.shape[0], 1 + v.shape[1]))
    print v.shape
    v[:,3] = 1.
    v = v.transpose()
    v = np.dot(xform, v)
    v = v.transpose()
maxs = np.array([max(maxs[0], np.max(v[:,0])), \
                 max(maxs[1], np.max(v[:,1])), \
                 max(maxs[2], np.max(v[:,2]))])
mins = np.array([min(mins[0], np.min(v[:,0])), \
                 min(mins[1], np.min(v[:,1])), \
                 min(mins[2], np.min(v[:,2]))])
```
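As an aside, the transpose-multiply-transpose dance can be collapsed: for an n x 4 array of homogeneous row vertices, `(xform @ v.T).T` is identical to a single `v @ xform.T`. A quick sketch confirming the equivalence (assumes numpy; the transform here is just an illustrative translation):

```python
import numpy as np

xform = np.eye(4)
xform[:3, 3] = [10., 20., 30.]  # translate by (10, 20, 30)

rng = np.random.default_rng(0)
v = np.hstack([rng.random((5, 3)), np.ones((5, 1))])  # n x 4 row vertices

a = (xform @ v.T).T  # transpose, multiply, transpose back
b = v @ xform.T      # one multiply on the row vectors
print(np.allclose(a, b))  # True
```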
There are actually two sets of transformations: one for the vertices as a set, to find model extents, and one when I actually convert triangles. The model-extents transformations seem to come out OK, but the triangle conversion does not. I'll look again at that code:
Code:
```
trilist = list(triset)
for tri in trilist:
    # Apply the transform, if there is one
    otv = tri.vertices
    if xform is not None:
        tv = np.copy(tri.vertices)
        oshape = tv.shape
        tv.resize((tv.shape[0], 1 + tv.shape[1]))
        tv[:,3] = 1.
        tv = tv.transpose()
        tv = np.dot(xform, tv)
        tv = tv.transpose()
        tv = np.resize(tv, oshape)
        tri.vertices = tv
        print("Transformed with %s" % str(xform))
    ind.geom_tri2voxel(tri)
    tri.vertices = otv
```
The big question is how you are producing your transform. It's the downward recursion, where, I'm assuming, each sub-transformation needs to be composed with its predecessors, that I was concerned about.
elfprince13 wrote:
The big question is how you are producing your transform. It's the downward recursion, where, I'm assuming, each sub-transformation needs to be composed with its predecessors, that I was concerned about.
It certainly does, and here's how I do that:
Code:
```
def recurse_dive(node, mode, xform, ind):
    xform2 = None
    # Deal with fetching and possibly combining transforms
    if node.transforms:
        xform2 = node.transforms[0].matrix
        if xform is not None:
            xform = np.dot(xform, xform2)
        else:
            xform = xform2
    # Deal with the geometry, if it has any
    for child in node.children:
        if isinstance(child, collada.scene.NodeNode):
            ind = recurse_dive(child.node, mode, xform, ind)
        elif isinstance(child, collada.scene.Node):
            ind = recurse_dive(child, mode, xform, ind)
        elif isinstance(child, collada.scene.GeometryNode):
            ind = recurse_geometry(child.geometry, mode, xform, ind)
        else:
            print xform
            print("Found an unknown %s" % (type(child)))
    return ind
```
See any issues? I can also check in this code. The fact that it's computing proper model extents makes me think only the transform right before geom_tri2voxel(tri) is bad.
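For what it's worth, `np.dot(xform, xform2)` (parent first) is the order that applies the child transform inside the parent's frame; reversing it scales or rotates the parent's translation too. A toy check with made-up translate/scale helpers:

```python
import numpy as np

def translate(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def scale(s):
    m = np.eye(4)
    m[:3, :3] *= s
    return m

parent, child = translate(10., 0., 0.), scale(2.)
p = np.array([1., 0., 0., 1.])

# parent-then-child: scale in the child frame, then translate
print(parent @ child @ p)  # [12.  0.  0.  1.]
# reversed: the parent's translation gets scaled too
print(child @ parent @ p)  # [22.  0.  0.  1.]
```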

Edit: Here's a debug-printed example of the applied transformation.

Code:
```
[[  422.91607666   274.3604126   1407.63195801]
 [  369.58792114   278.96340942  1163.76794434]
 [  382.152771     239.80357361  1163.76794434]]
transformed with
[[  6.71456754e-01   7.41043389e-01   0.00000000e+00  -5.01993225e+02]
 [ -7.41043389e-01   6.71456754e-01   0.00000000e+00   1.37287537e+02]
 [  0.00000000e+00   0.00000000e+00   9.74690020e-01   6.00799463e+03]
 [  0.00000000e+00   0.00000000e+00   0.00000000e+00   1.00000000e+00]]
yielding
[[ -1.47103987e+01   8.10952568e+00   7.37999951e+03]
 [  1.00000000e+00   5.47721191e+02   7.11983398e+02]
 [  6.38047510e+03   1.00000000e+00   2.79426636e+02]]
```

Edit #1 and #2: More debug output revealed that the first tv.resize() isn't doing what it's meant to. It's doing something closer to reshape().

Edit #3: It seems the distinction between a = numpy.resize(a,shape) and a.resize(shape) is one that's confusing me a lot. I'll have to re-read the numpy documentation, because I didn't expect a.resize(shape) to act like a.reshape(shape).
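To make the distinction concrete (a sketch, assuming numpy): both `a.resize(shape)` and `np.resize(a, shape)` just reflow the flat buffer into the new shape, so a (n, 3) -> (n, 4) resize smears vertex components across rows. Neither appends a column; building the homogeneous column explicitly does what was intended here.

```python
import numpy as np

v = np.arange(6., dtype=float).reshape(2, 3)  # two vertices

# In-place resize: keeps the flat buffer, zero-pads the end.
a = v.copy()
a.resize((2, 4), refcheck=False)
print(a)  # rows are scrambled: [[0. 1. 2. 3.] [4. 5. 0. 0.]]

# np.resize: same reflow, but repeats the data instead of zero-padding.
b = np.resize(v, (2, 4))

# What was actually wanted: append a column of ones.
h = np.hstack([v, np.ones((2, 1))])
print(h)  # [[0. 1. 2. 1.] [3. 4. 5. 1.]]
```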
