Posts

Route Baker

Mount Baker - CNC Route 01

I’m just finishing up a Christmas gift for my parents: a CNC route of Koma Kulshan (Mount Baker) in Pseudotsuga menziesii (Douglas Fir).  It’s a sculptural piece, measuring about 20″ x 25″.  It was routed on a recently constructed 8′ x 4′ CNC router known as Frankenstein.  Before I go on, a big thanks to Scott Crawford for helping out with this little project.  For those curious: more details on Frankenstein are here.

Here’s the backstory: both of my parents devoted their entire careers to the National Park Service.  My dad’s first “real job” was with Olympic National Park and, a few years ago, he retired from decades of service at North Cascades National Park.  (He keeps coming out of retirement to do special projects for the park.)  My mom was one of the first female law-enforcement rangers in the National Park Service and is still a seasonal employee at North Cascades.  As a park kid, I grew up living under tall trees, never far from the mountains and rivers without end.  Nearly all our family vacations were to other national parks…much to the annoyance of my sister and me.  I now see the National Parks as a sort of organized religion where the cathedral is the wilderness.  Those devoted to the protection of wild spaces fill their homes with references to the icons of their faith.  My childhood home was no exception.  I’m proud to have grown up in this sort of church.

Since my parents retired, they have been doing what retired people do in the Northwest: figuring out creative ways to escape during the rainiest, dreariest months (November – June).  My mom is originally from Fresno, California.  Though she lived 30+ years in the Northwest and considers it home, she always missed the Central San Joaquin.  She wanted the best of both worlds (but Fresno?  FresYES!), so they built a house down south amongst the vineyards and orchards.  They are officially “snowbirds.”  Since they moved in, I’ve made it my quest to fill their new house with tokens of the Northwest: photos, maps, books…and now, a largish sculpture of a mountain we lived beneath, cut from the ubiquitous tree of the Pacific Northwest.  I feel this is only fair, as my childhood homes were filled with reminders of California’s great parks: Yosemite, Sequoia & Kings Canyon…the list could go on and on.  Naturally, I wanted their new Fresno home to smell of Doug Fir and be filled with constant reminders of life passed beneath dormant volcanoes…

Mount Baker - CNC Route 02

I’m almost done with the finishing work – sanding, sealing, etc. – and I’m happy with it.  It’s very tactile.  I think it evokes those aging plaster 3D relief maps – covered in fingerprints – that you typically find in the lobby of most National Park visitor centers.  The digital accuracy of the CNC tool paths works well with the analog imperfections in the wood.  The grain nearly follows the topo lines of the geography, and there are spots where the router carved through a knot and it reads as a tarn that may or may not exist.  It definitely smells of Doug Fir.  Most importantly, I think my parents will like it, and that it will remind them to return north at the beginning of summer.

Here’s a quick video of the modeling and routing process …
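For the curious, here’s roughly what the modeling step looks like in code – a minimal sketch, not the actual workflow, assuming a GeoTIFF DEM of the mountain (the file name, relief scale, and library choice are all my own):

```python
import numpy as np
import rasterio  # reads GeoTIFF elevation data

# Hypothetical DEM of the Mount Baker area; any GeoTIFF DEM would do.
with rasterio.open("mt_baker_dem.tif") as dem:
    elevation = dem.read(1).astype(float)

# Scale the terrain to the finished piece: ~20" x 25" in plan.
rows, cols = elevation.shape
x = np.linspace(0, 25.0, cols)   # inches across the blank
y = np.linspace(0, 20.0, rows)   # inches along the blank
z = elevation - elevation.min()
z = z / z.max() * 3.0            # ~3" of total relief (a guess)

# The resulting heightfield (x, y, z) can be meshed and handed to
# CAM software, which generates the roughing and finishing toolpaths.
```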

So, what to route next?

Sketching Dynamic Geometry

Lately, I’ve been exploring the intersection between sketching, coding, and parametric modeling.  I’ve been working on an iPad app built around interactions that will, I hope, strike a balance between “manual” and parametric modeling.

Grasshopper and SketchUp are two of my favorite design applications, so if you know either of those beautifully crafted pieces of software, you will see where I’m going with this.  At this point, I’ve only just scratched the surface, and I thought I’d share a little bit of what I’ve been up to…
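To make “dynamic geometry” concrete: under the hood of tools like Grasshopper is a dependency graph that re-evaluates downstream geometry whenever an upstream parameter changes.  Here’s a toy sketch of that idea (the class names and structure are mine, not the app’s):

```python
class Param:
    """A value that notifies dependents when it changes."""
    def __init__(self, value):
        self.value, self.dependents = value, []
    def set(self, value):
        self.value = value
        for node in self.dependents:
            node.update()

class Midpoint:
    """Derived geometry: stays halfway between two draggable points."""
    def __init__(self, a, b):
        self.a, self.b = a, b
        a.dependents.append(self)
        b.dependents.append(self)
        self.update()
    def update(self):
        ax, ay = self.a.value
        bx, by = self.b.value
        self.point = ((ax + bx) / 2, (ay + by) / 2)

a, b = Param((0, 0)), Param((4, 0))
m = Midpoint(a, b)
a.set((2, 2))      # drag a point "manually"...
print(m.point)     # ...and the derived geometry follows: (3.0, 1.0)
```

The “manual” half of the equation is the dragging; the parametric half is everything downstream updating for free.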

More to come.

Calvino on Models

“The construction of a model […] was for him a miracle of equilibrium between principles (left in shadow) and experience (elusive), but the result should be more substantial than either.  In a well-made model, in fact, every detail must be conditioned by the others, so that everything holds together in absolute coherence, as in a mechanism where if one gear jams, everything jams.  A model […] is that in which nothing has to be changed, that which works perfectly; whereas reality, as we see clearly, does not work and constantly falls to pieces; so we must force it, more or less roughly, to assume the form of the model.”

“A delicate job of adjustment was then required, making gradual corrections in the model so it would approach a possible reality, and in reality to make it approach the model.  In fact, the degree of pliability is not unlimited […]; even the most rigid models can show some unexpected elasticity.  In other words, if the model does not succeed in transforming reality, reality must succeed in transforming the model.”

“Mr. Palomar’s rule had gradually been changing: now he needed a great variety of models, whose elements could be transformed in order to arrive at one that would best fit reality, a reality that, for its own part, was always made up of many different realities, in time and in space.”

– Italo Calvino, Mr. Palomar (translated from the Italian by William Weaver), Harcourt Brace Jovanovich, pp. 109–110

GoogleEarth to OBJ

OBJ from GoogleEarth

Just came across a method of using OGLE and GLIntercept to dump geometry from GoogleEarth to the OBJ file format.  I’ve summarized the steps here, but complete and detailed instructions can be found on the EyeBeam OGLE website.

DISCLAIMER:  The following is for illustration purposes only. The following text does not advocate for or condone the commercial use of copyrighted materials without the consent of the owner(s) or author(s).  Furthermore, since this process requires changing some system libraries (dll files), the author of this text is not responsible for damages to your computer or loss of data.  Follow these instructions at your own risk.

Prerequisites: GoogleEarth, GLIntercept, and OGLE (see the EyeBeam OGLE website for downloads).

Instructions:

1. Install GLIntercept…

2. Copy the system .dll (C:\WINDOWS\system32\opengl32.dll) to your GoogleEarth directory (name it opengl32.orig.dll) as a backup.
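If you’d rather script that backup step than copy by hand, a few lines of Python will do it (the GoogleEarth install path is a guess – adjust it for your machine):

```python
import shutil

# Hypothetical install path -- adjust for your machine.
src = r"C:\WINDOWS\system32\opengl32.dll"
dst = r"C:\Program Files\Google\Google Earth\opengl32.orig.dll"
shutil.copy2(src, dst)  # copy2 preserves file metadata
print("backed up", src, "->", dst)
```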

Continue reading “GoogleEarth to OBJ”

GIS to 3DS

Here are some basic instructions for converting and importing GIS building and terrain shapefile data into 3DS, Rhino, etc. This may not be the most elegant or efficient method of conversion out there, but it does the job.

The process of converting GIS building and terrain data into a usable 3D model is relatively simple, though not necessarily straightforward.  The general idea is to use GIS data, including non-graphical data fields like ‘apex’ and ‘elevation,’ to create a 3D model that can later be edited with various 3D modeling software.  For buildings, the method is to translate the building footprints (from the GIS shapefile) to their appropriate altitude (resting on the ground), extrude the footprints to their appropriate height (the apex of the building), and then export it all as VRML geometry.  For terrain, the method is to convert a contour map into a TIN (Triangulated Irregular Network), then to a raster image, then back to a TIN, and then export it as VRML geometry.
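To make the building step concrete, here is a sketch of the footprint extrusion in Python – writing OBJ rather than VRML for brevity, with the ‘elevation’ and ‘apex’ values hard-coded where a real script would read them from the shapefile’s attribute table:

```python
def extrude_footprint(footprint, elevation, apex):
    """Turn a 2D footprint into floor/roof rings plus wall quads.

    footprint: [(x, y), ...] in counter-clockwise order
    elevation: ground altitude of the building base
    apex:      altitude of the building's highest point
    """
    n = len(footprint)
    floor = [(x, y, elevation) for x, y in footprint]
    roof = [(x, y, apex) for x, y in footprint]
    verts = floor + roof
    # One quad per edge: floor_i, floor_j, roof_j, roof_i (1-based for OBJ).
    # Roof and floor caps are omitted for brevity.
    walls = [(i + 1, (i + 1) % n + 1, (i + 1) % n + n + 1, i + n + 1)
             for i in range(n)]
    return verts, walls

verts, walls = extrude_footprint([(0, 0), (10, 0), (10, 6), (0, 6)],
                                 elevation=120.0, apex=135.0)
with open("building.obj", "w") as f:
    for x, y, z in verts:
        f.write(f"v {x} {y} {z}\n")
    for a, b, c, d in walls:
        f.write(f"f {a} {b} {c} {d}\n")
```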

Continue reading “GIS to 3DS”

Agents, Crowds, Architectures

“For seeing life is but a motion of limbs, the beginning whereof is in some principal part within, why may we not say that all automata (engines that move themselves by springs and wheels as doth a watch) have an artificial life?”
– Thomas Hobbes, Leviathan, 1651.

For sheer uncanny sci-fi weirdness, nothing tops reading the abstracts for the funded projects on the Defense Science Board’s website.  The DSB is the public face of funding for academic research relevant to DARPA – the Defense Advanced Research Projects Agency.  You may know DARPA from such projects as directed heat-ray weaponry, sub-vocalization detection, passive radar, the Internet, etc.  DARPA is not as cloak-and-dagger as it may seem from its ominous name; in the defense world, it’s the Pentagon’s way of funding those wacky little “off the wall” projects that might not otherwise receive support.  You know, those cute little defenseless defense projects?  DARPA likes to call those “strategic technology vectors.”   This year, one of the main strategic vectors being pushed forward by the Pentagon is in a field called “agent modeling” or “crowd dynamics.”  DARPA has various terms for this line of research, from crowd theory to “human terrain mapping” to “social simulation.”  You can think of this broadly as the science of individual and collective behavior situated in an environment.

Shortly after the US-led invasion of Iraq, members of the US military entrusted with coordinating crowd control and counter-insurgency measures were faced with the problem of navigating and intervening in unknown social and political territory.  Models of collective human behavior were thought critical to effective planning and “logistics.”  This is not the first time that the Pentagon has decided to focus on the quantitative social science of crowds and collective behavior.  During the Vietnam War, DARPA launched an ambitious endeavor called “Project Camelot.”  DARPA’s director, R.L. Sproul, testified before Congress that “it is [our] primary thesis that remote area warfare is controlled in a major way by the environment in which the warfare occurs; by the sociological and anthropological characteristics of the people involved in the war” (McFate, 2005).  Project Camelot was tested in Chile, but was met with such local resistance and negative press domestically that Secretary of Defense Robert McNamara cancelled the program.  Much has happened since Sproul’s time and, as of 2003, this line of research is back, with new tools and new funding.

What is in question here is simulation, more specifically the simulation of people and crowds.  Simulation of physical systems is now making inroads into architectural practice.  The facility to simulate natural processes is greatly aided by the low cost and ubiquity of computation.  The ability to simulate lighting conditions, thermal properties, acoustic effects, and structural stability is becoming part of sustainable design practice.  Thermal simulation (employing software such as Ecotect) can give us an accurate picture of how a space will behave under different heating/cooling and seasonal conditions, with varying numbers of bodies occupying the space.  Physically-based rendering (using Radiance) can produce images that actually contain data about light.  But the behavior of light in a space is fundamentally different from the behavior of people…right?

Simulating human behavior is nothing terribly novel.  Recently, the notion of computationally simulated agents has made its way into popular culture.  The Sims by Electronic Arts – the best-selling computer game of all time – casts the player as the omniscient controller of a family of simulated agents.  Massive – a software tool developed for the film trilogy The Lord of the Rings – allows the user to simulate a “massive” crowd of autonomous agents who interact and exhibit complex emergent behavior.  Now these games, tools, and visualizations are making their way into design practice.  What does this mean for architecture?  Imagine your SketchUp model populated with hundreds of animated characters.  Instead of the little outlined silhouettes frozen in mid-stride, exploratory agents walk through, inhabit, use, abuse, and dwell in your design.

How does this work?  Just how predictable are you?  So many theories, so little time.  Biologists have long been fascinated with collective behavior in the animal kingdom.  Flocks, swarms, schools, and herds all display the hallmarks of collective emergent organization springing from the application of simple rules to large systems.  Consider two models of your behavior: “top-down” and “bottom-up.”  The top-down view sees your behavior as a direct result of the layout of an environment.  The bottom-up approach casts a person’s behavior as a cognitively calculated, variable-juggling reaction to input about an environment.  Both represent you as a convenient computational abstraction.  You are not you.  You are an agent within a system.  How you behave is entirely up to the system.
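Those simple rules are easy to state in code.  The canonical example is Craig Reynolds’ boids model: each agent steers by cohesion, alignment, and separation with respect to its neighbors.  A minimal sketch (the weights and radii here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0, 100, (50, 2))   # 50 agents scattered over a 100x100 plan
vel = rng.normal(0, 1, (50, 2))      # initial headings

def step(pos, vel, dt=0.5):
    """One tick: every agent applies the same three local rules."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        near = (dist > 0) & (dist < 10)          # this agent's neighborhood
        if not near.any():
            continue
        cohesion = offsets[near].mean(axis=0) * 0.01           # drift toward neighbors
        alignment = (vel[near].mean(axis=0) - vel[i]) * 0.05   # match their heading
        crowded = (dist > 0) & (dist < 3)
        separation = (-offsets[crowded].sum(axis=0) * 0.05
                      if crowded.any() else np.zeros(2))       # avoid collisions
        new_vel[i] = vel[i] + cohesion + alignment + separation
    return pos + new_vel * dt, new_vel

for _ in range(200):
    pos, vel = step(pos, vel)   # flocking emerges with no global plan
```

No agent knows anything about the flock; the collective pattern is entirely a side effect of the local rules.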

This brings up a number of theoretical and technical questions: What behavior should the agent simulate?  Does the agent exhibit this behavior?  Do humans behave in the same way?  How do groups of humans behave?  Do models exhibit these group behaviors?  Can models capture something beyond simply behavior? Can they capture emotion?  Mood?  Cognitive process?    What does this have to do with architecture?  Just how predictable are people?  Should we model agents and crowds at all?  Putting aside the final normative questions for the moment, let’s first consider the top-down approach.

Continue reading “Agents, Crowds, Architectures”

P-cha K-cha

I’m no longer a Pecha Kucha newbie.  Along with a bunch of uber-talented presenters/artists/burlesque dancers, I took part in Pecha Kucha Seattle Chapter #12 at Ouch My Eye in SoDo this past Thursday evening.  I presented a quick 6:40 called “Some Motifs on Early Adopters,” a pseudo-autobiographical ode to the innovators, early adopters (and fast followers) I’ve crossed paths with.  Pecha Kucha is a great format…it avoids the slow versions of “Death by PowerPoint,” but it is certainly still possible to commit swift “Suicide by PowerPoint” (if one wanted to).  None of the presentations during #12 came anywhere close…all were extremely funny and energetic.  I just wish I hadn’t been suffering from a massive sinus infection before, during, and after the event.  Even still, the energy in the room kept the pain at bay, and I’m glad to see the creative crowd get nice and unprofessional in front of each other.  Hats off to everyone who was in the room.  Pecha Kucha is my new favorite Japanese onomatopoeia (up there with puru puru puru & wan wan, wan wan).

Matchmoving for Microstation

SynthEyes-to-Microstation
SynthEyes is software that pulls 3D coordinate and positional information from a series of 2D images (a video clip). Using a combination of trigonometry and computer-vision techniques, it is possible to infer a virtual camera location that corresponds to the position of the camera used to record the clip. This position can then be remapped to a series of “position tracks” within the scene, giving you coordinates upon which to composite your virtual building model. The output from SynthEyes can then be imported into Bentley Microstation (the second part of this tutorial), where you can set up the lighting and rendering parameters to animate the scene. Begin the matchmoving process in SynthEyes.
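The trigonometry at the heart of this reduces to triangulation: once two camera poses are known, a feature tracked in both frames pins down a single 3D point.  Here is a toy version of that one step – not SynthEyes’ actual solver – with invented camera matrices:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one tracked feature.

    P1, P2: 3x4 camera projection matrices for two frames
    x1, x2: (u, v) image coordinates of the same feature in each frame
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # homogeneous -> 3D point

# Two invented cameras: identity pose, and one shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))   # ~[0.5, 0.2, 4.0]
```

Matchmoving software runs this in reverse at scale: solve for the camera poses that make hundreds of such tracked features geometrically consistent.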

Open up the shot in SynthEyes: File -> Open. SynthEyes accepts common video formats and should read all the video “meta-data” from your clip if it was generated digitally. The frame rate, interlacing, and image and pixel aspect ratios should be correct. If your clip was generated with an older analog camera, you may need to track down all this information from the video capture program you used to import the clip.

Continue reading “Matchmoving for Microstation”

SketchUp to Microstation (via OBJ)

SKP -> OBJ -> DGN

NOTE: This workflow used SketchUp 6 and Bentley Architecture XM.  Subsequent versions of either application may have better (or worse) performance using native SKP (h/t to SF).

There may be situations during the design process where importing SketchUp models into Microstation is necessary. For example, you may want to search Google’s 3D Warehouse for context buildings that you can reference into your Bentley Architecture model for a context rendering of the site. The best way to go from SketchUp to Microstation is through .OBJ. OBJ is the Wavefront file format and seems to be very material/texture friendly. Microstation has no trouble digesting the SketchUp-assigned materials and importing them if the correct settings are applied. Follow the directions below to get (most, if not all) of the correct textures and scale.
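Before importing, it can be worth a quick sanity check that SketchUp’s exporter actually wrote the material references into the .OBJ (the file name here is hypothetical):

```python
# Quick sanity check: does the exported OBJ reference a material
# library and per-face materials?  (File name is hypothetical.)
with open("model.obj") as f:
    for line in f:
        if line.startswith(("mtllib", "usemtl")):
            print(line.strip())
# Expect one "mtllib something.mtl" plus a "usemtl ..." per material.
# If these lines are missing, Microstation has nothing to map textures from.
```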

Continue reading “SketchUp to Microstation (via OBJ)”