Hi there!
Long time lurker, first time posting.
I currently have an IronPython script in Rhino that, given a CSV file containing three columns of XYZ coordinates (plus other data), checks each row's coordinates against an array of solids in Brep form to determine which solid that XYZ point is inside.
My current approach is to first convert all the Rhino solids in a 3dm file into Breps and store them in an in-memory dictionary, keyed by the solid's ID (think of a room number in a hotel), with the Brep data as the value. This is done only once per script execution and takes only a few seconds.
Then, for each row in the CSV, I convert the XYZ columns into a Point3d, traverse the dictionary of Breps, and use brep.IsPointInside() to check whether that Point3d is inside each solid. Matches are stored and returned as extra columns in the CSV file. This is a nested for loop, so the time complexity is O(rows × solids).
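For context, the per-row loop looks roughly like the sketch below. Axis-aligned boxes stand in for the Breps so the example runs outside Rhino; in the real script the inner test would be RhinoCommon's `brep.IsPointInside(point, tolerance, strictly_in)`, and all names here (`solids`, `classify_rows`, the column names) are illustrative, not my actual code.

```python
import csv
from io import StringIO

# Stand-in for a Brep: an axis-aligned box (min corner, max corner).
# In Rhino the dictionary values would be Rhino.Geometry.Brep objects.
solids = {
    "room_101": ((0.0, 0.0, 0.0), (10.0, 10.0, 3.0)),
    "room_102": ((10.0, 0.0, 0.0), (20.0, 10.0, 3.0)),
}

def is_point_inside(box, p):
    """Stand-in for brep.IsPointInside(point, tolerance, strictly_in)."""
    (x0, y0, z0), (x1, y1, z1) = box
    x, y, z = p
    return x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1

def classify_rows(csv_text):
    """O(rows x solids): every row is tested against every solid."""
    out = []
    for row in csv.DictReader(StringIO(csv_text)):
        p = (float(row["x"]), float(row["y"]), float(row["z"]))
        matches = [sid for sid, box in solids.items() if is_point_inside(box, p)]
        row["solid_id"] = ";".join(matches)  # extra output column
        out.append(row)
    return out

rows = classify_rows("x,y,z\n5,5,1\n15,5,1\n50,50,50\n")
print([r["solid_id"] for r in rows])  # ['room_101', 'room_102', '']
```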
This has been working fine, but for large CSV files (hundreds of thousands of rows) it can take several minutes, and our data sets keep growing.
Do you have any suggestions for speeding this up? Multithreading for parallel processing? Some kind of k-d tree implementation? Finding nearest neighbors first instead of traversing all solids? Would switching to C# give a noticeable performance boost over Python?
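One cheap win along the spatial-indexing line of thought: precompute each solid's bounding box once and only call the expensive exact test on solids whose box could contain the point. Below is a minimal sketch of that prefilter using a uniform XY grid (a spatial hash), again with plain tuples standing in for Breps; in Rhino you would get the box from `brep.GetBoundingBox(True)`, run `brep.IsPointInside()` only on the returned candidates, and `Rhino.Geometry.RTree` could serve the same role as the hand-rolled grid. Cell size, names, and the example boxes are all assumptions for illustration.

```python
from collections import defaultdict
from math import floor

CELL = 10.0  # grid cell size; tune to a typical solid's footprint

# Stand-in solids: id -> axis-aligned bounding box ((min), (max)).
# In Rhino: bbox = brep.GetBoundingBox(True); store (bbox.Min, bbox.Max).
boxes = {
    "room_101": ((0.0, 0.0, 0.0), (10.0, 10.0, 3.0)),
    "room_102": ((10.0, 0.0, 0.0), (20.0, 10.0, 3.0)),
}

def build_grid(boxes):
    """Map each XY grid cell to the solids whose bbox overlaps it (built once)."""
    grid = defaultdict(list)
    for sid, ((x0, y0, _), (x1, y1, _)) in boxes.items():
        for cx in range(int(floor(x0 / CELL)), int(floor(x1 / CELL)) + 1):
            for cy in range(int(floor(y0 / CELL)), int(floor(y1 / CELL)) + 1):
                grid[(cx, cy)].append(sid)
    return grid

def candidates(grid, p):
    """Only these solids need the exact (expensive) IsPointInside test."""
    x, y, _ = p
    return grid.get((int(floor(x / CELL)), int(floor(y / CELL))), [])

grid = build_grid(boxes)
print(candidates(grid, (5.0, 5.0, 1.0)))     # small candidate list near room_101
print(candidates(grid, (50.0, 50.0, 50.0)))  # []: skip all exact tests
```

With this, each point touches only the handful of solids sharing its cell instead of all of them, so the overall cost drops from O(rows × solids) toward O(rows); parallelism can then be layered on top by chunking the CSV rows.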
Eventually I would like to migrate this to Rhino.Compute; I'm not sure if that changes anything.
Thank you!