I’m just finishing up a Christmas gift for my parents: a CNC route of Koma Kulshan (Mount Baker) in Pseudotsuga menziesii (Douglas Fir). It’s a sculptural piece, measuring about 20″ x 25″. It was routed on a recently constructed 8′ x 4′ CNC router known as Frankenstein. Before I go on, a big thanks to Scott Crawford for helping out with this little project. For those curious: more details on Frankenstein are here.
Here’s the backstory: both of my parents devoted their entire careers to the National Park Service. My dad’s first “real job” was with Olympic National Park and, a few years ago, he retired from decades of service at North Cascades National Park. (He keeps coming out of retirement to do special projects for the park.) My mom was one of the first female law-enforcement rangers in the National Park Service and is still a seasonal employee at North Cascades. As a park kid, I grew up living under tall trees and never far from the mountains and rivers without end. Nearly all our family vacations were to other national parks…much to the annoyance of my sister and me. I now see the National Parks as a sort of organized religion where the cathedral is the wilderness. Those devoted to the protection of wild spaces fill their homes with references to the icons of their faith. My childhood home was no exception. I’m proud to have grown up in this sort of church.
Since my parents retired, they have been doing what retired people do in the Northwest: figuring out creative ways to escape during the rainiest, dreariest months (November – June). My mom is originally from Fresno, California. Though she lived in the Northwest for 30+ years and considers it home, she always missed the Central San Joaquin. She wanted the best of both worlds (but Fresno? FresYES!), so they built a house down south amongst the vineyards and orchards. They are officially “snowbirds.” Since they moved in, I’ve made it my quest to fill their new house with tokens of the Northwest: photos, maps, books…and now a largish sculpture of a mountain we lived beneath, cut from the ubiquitous tree of the Pacific Northwest. I feel this is only fair, as my childhood homes were filled with reminders of California’s great parks: Yosemite, Sequoia, Kings Canyon…the list could go on and on. Naturally, I wanted their new Fresno home to smell of Doug Fir and be filled with constant reminders of life passed beneath dormant volcanoes…
I’m finishing up the final work – sanding, sealing, etc. – and I’m happy with it. It’s very tactile. I think it evokes those aging plaster 3D relief maps – covered in fingerprints – that you find in the lobby of most National Park visitor centers. The digital accuracy of the CNC tool paths plays well against the analog imperfections in the wood. The grain nearly follows the topo lines of the geography, and there are spots where the router carved through a knot so that it reads as a tarn that may or may not exist. It definitely smells of Doug Fir. Most importantly, I think my parents will like it, and I hope it reminds them to return north at the beginning of summer.
Here’s a quick video of the modeling and routing process …
So, what to route next?
Lately, I’ve been exploring the intersection of sketching, coding, and parametric modeling. I’ve been working on an iPad app built around interactions that will hopefully strike a balance between “manual” and parametric modeling.
Grasshopper and SketchUp are two of my favorite design applications, so if you know those beautifully crafted programs, you will see where I’m going with this. At this point I’ve only scratched the surface, but I thought I’d share a little bit of what I’ve been up to…
More to come.
“The construction of a model […] was for him a miracle of equilibrium between principles (left in shadow) and experience (elusive), but the result should be more substantial than either. In a well-made model, in fact, every detail must be conditioned by the others, so that everything holds together in absolute coherence, as in a mechanism where if one gear jams, everything jams. A model […] is that in which nothing has to be changed, that which works perfectly; whereas reality, as we see clearly, does not work and constantly falls to pieces; so we must force it, more or less roughly, to assume the form of the model.”
“A delicate job of adjustment was then required, making gradual corrections in the model so it would approach a possible reality, and in reality to make it approach the model. In fact, the degree of pliability is not unlimited […]; even the most rigid models can show some unexpected elasticity. In other words, if the model does not succeed in transforming reality, reality must succeed in transforming the model.”
“Mr. Palomar’s rule had gradually been changing: now he needed a great variety of models, whose elements could be transformed in order to arrive at one that would best fit reality, a reality that, for its own part, was always made up of many different realities, in time and in space.”
– Italo Calvino, Mr. Palomar, translated from the Italian by William Weaver (Harcourt Brace Jovanovich), pp. 109–110
First “Hello World” cut with the newly built “Frankenstein” 3-axis CNC router. This obligatory doubly-curved surface was cut in Doug Fir with a 1/4″ ball bit. It took about 15 minutes to cut.
Just came across a method of using OGLE and GLIntercept to dump geometry from GoogleEarth to the OBJ file format. I’ve summarized the steps here, but complete and detailed instructions can be found on the EyeBeam OGLE website.
DISCLAIMER: The following is for illustration purposes only. The following text does not advocate for or condone the commercial use of copyrighted materials without the consent of the owner(s) or author(s). Furthermore, since this process requires changing some system libraries (dll files), the author of this text is not responsible for damages to your computer or loss of data. Follow these instructions at your own risk.
1. Install GLIntercept…
2. Copy the system .dll (C:\WINDOWS\system32\opengl32.dll) to your GoogleEarth directory (name it opengl32.orig.dll) as a backup.
Continue reading “GoogleEarth to OBJ”
Here are some basic instructions for converting/importing GIS building and terrain shapefile data into 3DS, Rhino, etc. This may not be the most elegant or efficient manner of conversion out there, but it does the job.
Converting GIS building and terrain data into a usable 3D model is a relatively simple (but not necessarily straightforward) task. The general idea is to use the GIS data, including non-graphical data fields like ‘apex’ and ‘elevation,’ to create a 3D model that can later be edited in various 3D modeling applications. For buildings, the method is to translate the building footprints (from the GIS shapefile) to their appropriate altitude (resting on the ground), extrude the footprints to their appropriate height (the apex of the building), and then export it all as VRML geometry. For terrain, the method is to convert a contour map into a TIN (Triangulated Irregular Network), then to a raster image, then back to a TIN, and then export it as VRML geometry.
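The building step above can be sketched in a few lines. This is a minimal illustration (not the tutorial’s actual tooling, and the function name is mine): each footprint ring from the shapefile, together with its ‘elevation’ and ‘apex’ attribute fields, becomes a flat-roofed prism.

```python
# Illustrative sketch: extrude a GIS building footprint into a prism.
# A footprint is a list of (x, y) vertices; 'elevation' and 'apex' are
# the shapefile attribute fields described above.

def extrude_footprint(footprint, elevation, apex):
    """Return the bottom ring, top ring, and wall quads of one building."""
    bottom = [(x, y, elevation) for x, y in footprint]  # resting on the ground
    top = [(x, y, apex) for x, y in footprint]          # lifted to roof height
    walls = []
    n = len(footprint)
    for i in range(n):
        j = (i + 1) % n  # wrap around so the last wall closes the ring
        walls.append([bottom[i], bottom[j], top[j], top[i]])
    return bottom, top, walls
```

The resulting rings and quads map directly onto the indexed face sets that VRML (and most mesh formats) expect.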
Continue reading “GIS to 3DS”
SynthEyes is an application that pulls 3D coordinate and positional information from a series of 2D images (a video clip). Using a combination of trigonometry and computer-vision techniques, it can infer a virtual camera location that corresponds to the position of the camera used to record the clip. This position can then be remapped to a series of “position tracks” within the scene, giving you coordinates on which to composite your virtual building model. The output from SynthEyes can then be imported into Bentley Microstation (the second part of this tutorial), where you set up the lighting and rendering parameters to animate the scene. Begin the matchmoving process in SynthEyes.
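For intuition, here is the forward model that a matchmoving solver effectively runs in reverse: a pinhole camera projecting 3D points to 2D image coordinates. Given enough 2D tracks of (unknown) 3D points across frames, the solver searches for the camera position that best explains them. This is illustrative only — not SynthEyes code — and the focal length and coordinates are made up.

```python
# Illustrative pinhole projection: world point -> 2D image coordinates.
# A matchmoving solver inverts this, recovering cam_pos from many
# observed 2D tracks.

def project(point, cam_pos, focal):
    """Project a 3D point (camera looking down +z) onto the image plane."""
    x, y, z = (p - c for p, c in zip(point, cam_pos))  # camera-relative coords
    return (focal * x / z, focal * y / z)              # perspective divide

# A point straight ahead of the camera lands at the image center:
# project((0, 0, 10), (0, 0, 0), 50) -> (0.0, 0.0)
```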
Open the shot in SynthEyes: File -> Open. SynthEyes accepts common video formats and should read all of the video metadata automatically if the clip was generated digitally; the frame rate, interlacing, and image and pixel aspect ratios should be correct. If your clip came from an older analog camera, you may need to track down this information from the video-capture program you used to import the clip.
Continue reading “Matchmoving for Microstation”
SKP -> OBJ -> DGN
NOTE: This workflow used SketchUp 6 and Bentley Architecture XM. Subsequent versions of either application may have better (or worse) performance using native SKP (h/t to SF).
There may be situations during the design process where importing SketchUp models into Microstation is necessary. For example, you may want to search Google’s 3D Warehouse for context buildings that you can reference into your Bentley Architecture model for a context rendering of the site. The best way to go from SketchUp to Microstation is through .OBJ. OBJ is the Wavefront file format and is very material/texture friendly. Microstation has no trouble digesting the SketchUp-assigned materials and importing them if the correct settings are applied. Follow the directions below to get (most, if not all of) the correct textures and scale.
Continue reading “SketchUp to Microstation (via OBJ)”
ARchitecture Hall is an interface in which Augmented Reality meets Building Information Modeling. I created it around June 2007, during my MS in Design Computing in the Design Machine Group, in the College of Architecture and Urban Planning at the UW. It is intended to be a simple, intuitive Tangible User Interface with a MagicLens for viewing BIM models interactively. It was published in and presented at eCAADe 2008 in Antwerp, Belgium.
The interaction was quite fluid and the polygon count very high: around 200,000 polygons displayed at 30 fps. Multiple stencil buffers were employed for the lens effect (thanks to Julian Looser for his inspiration on this one). A simple Heads-Up Display (HUD) lets the user toggle layer visibility. Check out this video of the basic interaction…
MxR Architecture, my MS thesis project, mixes physical and virtual models together in a single interface. Using Mixed Reality, virtual models can be superimposed over real models in real time, allowing the designer/user/client to make changes, simulate sun angles, run agent studies, add components, and transition to a fully immersive VR view at any time.
Check out this video for the basic interaction…
MxR – pronounced “mixer” – is a Mixed/Augmented Reality system intended to support collaboration during early phases of architectural design. MxR allows an interdisciplinary group of practitioners and stakeholders to gather around a table, discuss and test different hypotheses, visualize results, simulate different physical systems, and generate simple forms. MxR is also a test-bed for collaborative interactions and demonstrates different configuration potentials, from exploration of individual alternatives to group discussion around a physical model. As a MR-VR transitional interface, MxR allows for movement along the reality-virtuality continuum, while employing a simple tangible user-interface and a MagicLens interaction technique.