AU2013 Day 0: Dynamo/DesignScript Workshop

December 3, 2013

I spent my first day at Autodesk University 2013 in a workshop devoted to new visual programming tools Autodesk is developing. Literally. Developing as we speak. Demo versions were compiled the previous night, and it showed.

Not quite “programming for non-programmers”

Autodesk continues to sell “programming for non-programmers.” Look at the amazing things you can do without writing a single line of code!

First, it isn’t true. By mid-morning  we were seeing plenty of text-based imperative programming code. Second, there’s nothing wrong with that. I completely understand that programming represents a new kink in the traditional design iteration loop. But that doesn’t mean designers shouldn’t step up. It is frustrating, however, to see the patronizing way that Autodesk caters to the masses, emphasizing the gulf between their products and their clients. In fact, the gap just might be the natural outcome of a software company protecting its territory. We are developers. You’re not. We won’t pretend to understand what you do, if you don’t pretend to understand what we do.

The code abides

I find it particularly interesting that sometime between two months ago when I signed up for the DesignScript workshop, and last night, when they finished compiling, Autodesk made a major course correction: DesignScript and Dynamo merged.

Autodesk is vehement that DesignScript is great, that they can do great things with it because they own and control it. But the workshop examples don’t support that contention. There are some nice syntax hacks, and the language is inherently functional – or as Autodesk terms it, “associative” – as opposed to imperative. But the value of that subtlety is likely to be lost on most designers. Many of the code examples presented explicitly switched over to imperative mode.

And there was this bizarre response to a question about Python code-block nodes in Dynamo: we’ve provided multiple inputs but only one output because of language limitations. They then proceeded to show Dynamo nodes with multiple outputs as a comparison. Come on now. No multiple outputs because it’s Python? I don’t think so. There’s no reason you cannot send an intermediate value to an output as well as a return value. Perhaps what they mean is that they cannot automatically identify and publish all variables the way the DesignScript nodes do. A different variable-identification mechanism would solve that problem with no trouble.
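To illustrate the point, here is a minimal sketch of the workaround: a Dynamo Python node assigns its result to the single variable OUT, but nothing in Python limits you to one value, since OUT can be a list that downstream nodes unpack. The IN/OUT names follow Dynamo’s convention; everything else here is hypothetical plain Python, runnable outside Dynamo.

```python
# Sketch: a Dynamo Python node has one output slot (OUT), but OUT can be
# a list packing several intermediate values for downstream unpacking.
# IN and OUT are simulated here as ordinary Python variables.

def make_points(count):
    """Return several intermediate results at once."""
    xs = [i * 1.0 for i in range(count)]
    ys = [x * x for x in xs]
    total = sum(ys)
    return [xs, ys, total]        # multiple "outputs" packed into one list

IN = [4]                           # simulated Dynamo input list
OUT = make_points(IN[0])           # the node's single output slot

xs, ys, total = OUT                # downstream: unpack into separate values
```

Downstream graph nodes (for example, a list index node) can then pull each element out separately, which is all the “multiple outputs” case really requires.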

So the long and short of my first full day at Autodesk University? There is a lot of good work happening around Dynamo and there is a lot of hand waving going on regarding DesignScript, especially how it works with Dynamo and where it will go in the future. For now, if you are afraid of learning to program, with these tools you can still pretend you’re not.

Choosing the Right Programming Tools in CityEngine

November 26, 2013

In last week’s post on data driven visualization using CityEngine I showed that the results required using a variety of programming tools and techniques. No one programming environment or tool – traditional ArcGIS, CityEngine’s native “Computer Generated Architecture (CGA)” shape grammar, or direct Python scripting in CityEngine – was sufficient to accomplish all the tasks required. Ultimately, data-driven visualization requires a developer to be fleet of foot, recognizing which tool applies when, and implementing each to its best advantage.

GIS information is imported as shape layers. This layer shows the lots that have been identified as underutilized. On the right hand edge the object attributes panel shows additional GIS annotations for one particular lot that has been imported along with its shape.

ArcGIS is very powerful at manipulating the 2D geometry up front and then importing that data as annotated shapes into the CityEngine environment. Furthermore, Portland Metro has GIS experts more than ready to manipulate data by hand much more quickly than I can turn it into a stable program. Of course the goal was to see how far we could automate and replicate the process, so minimizing our dependency on manual processing was important as well.

Python programming creates routines using traditional imperative or functional programming techniques. The “Redevelop Routine” maps itself to the individual lots in a list, demolishes the existing buildings on a lot, and generates new building models based on its development type.

CityEngine’s Python environment is sparse. There is little geometry support, so even determining simple relationships between shapes takes extra time to get right. Importing a good third-party geometry package will help greatly in future work. Debugging support is minimal. I found it frustrating to repeatedly track down small syntax errors by commenting out large chunks of code and adding them back in bit by bit. Ultimately my routines were filled with print statements saying “You made it to here!” But Python gives the ultimate flexibility. While I find indentation as syntax crudely reminiscent of Fortran and punch cards, Python’s list handling and comprehensions are elegant and fun to work with. They are also tailor-made for manipulating large sets of geometry shapes. Python also allows me to import and use outside file-based data sources, such as the comma-separated value files I used to map development types to building types. This is an important feature, as it allows a future non-programmer to adjust and iterate through different data configurations without having to hack the code at each step.
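The CSV-driven mapping described above can be sketched in a few lines. This is a hypothetical illustration, not CityEngine’s API: the column names and building types are made up, and the file content is inlined so the sketch is self-contained.

```python
import csv
import io

# Sketch: map development types to building-type mixes from a CSV file,
# so a non-programmer can edit the data without touching the code.
# Column names and values are hypothetical.
CSV_TEXT = """dev_type,building_type,share
Residential,rowhouse,0.6
Residential,low_rise_apartment,0.4
Office,mid_rise_office,1.0
"""

def load_building_mix(csv_file):
    """Return {dev_type: [(building_type, share), ...]}."""
    mix = {}
    for row in csv.DictReader(csv_file):
        mix.setdefault(row["dev_type"], []).append(
            (row["building_type"], float(row["share"])))
    return mix

mix = load_building_mix(io.StringIO(CSV_TEXT))

# A list comprehension then picks the mix for each lot's development type:
lots = [{"id": 1, "dev_type": "Office"}, {"id": 2, "dev_type": "Residential"}]
lot_mixes = [mix[lot["dev_type"]] for lot in lots]
```

In practice the `io.StringIO` stand-in would be an `open()` call on the exported spreadsheet, and the lot records would come from CityEngine’s object attributes.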

This image shows a CGA rule file on the left-hand side.

Finally, CityEngine’s CGA shape grammar is by far the best environment for creating iconic shapes used to represent building types.  CGA rules also provide slick mechanisms for recursively processing shapes and selecting alternative rules based on percentage probabilities, which are not easily implemented in Python.
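The percentage-probability rule selection CGA provides can be emulated in Python, though less cleanly. The sketch below is the underlying idea only, not CGA syntax, and the rule names are hypothetical; taking the uniform sample as a parameter keeps the function deterministic and testable.

```python
# CGA lets a rule split into alternatives by percentage, roughly:
#   Lot --> 60% : Rowhouse  30% : Apartment  else : Park
# A rough Python equivalent of that selection mechanism:
def pick_rule(alternatives, u):
    """alternatives: [(probability, rule_name)]; u: uniform sample in [0, 1)."""
    cumulative = 0.0
    for probability, rule in alternatives:
        cumulative += probability
        if u < cumulative:
            return rule
    return alternatives[-1][1]    # guard against floating-point round-off

RULES = [(0.6, "Rowhouse"), (0.3, "Apartment"), (0.1, "Park")]
```

Called as `pick_rule(RULES, random.random())` over a list of lots, this reproduces the percentage split; CGA’s advantage is that the same construct also drives recursive shape subdivision, which takes considerably more scaffolding in Python.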

Completing Data-Driven Visualization in CityEngine

November 19, 2013
Planning grid overlaid with tax lots marked for potential redevelopment

In an early September post I described a project for Portland Metro and the University of Oregon to implement data-driven visualization using ESRI’s CityEngine.  I completed the work at the end of October and wanted to share some terrific results.

The goal was to create a 3D model of an urban landscape similar to the simplified physical site models architects use, and to compare alternative development strategies using a real place and real development data. Starting with projected development patterns for the City of Portland’s Gateway District, we set out to show how the assigned development types would affect growth over the projected period, and to visualize alternative development choices. The progression of images here shows the action of the algorithms implemented.

Initial conditions mapped onto grid.

First, we imported tax lot and building footprint information from Portland’s traditional GIS map sources. On top of that we overlaid a 264 x 264 foot grid used for development planning purposes. Each grid cell was annotated with one of sixteen proposed development types: Residential, Office, Light Industrial, etc., differentiated by color. To show initial conditions, I then created iconic building types on existing building footprints. Black buildings represent structures on under-valued properties identified for redevelopment. White structures represent buildings likely to remain intact. I based approximate building heights on Lidar data.

Using Python inside CityEngine, I mapped the lots to their containing development grid and development type. An Excel spreadsheet mapped each development type to a mixture of 6 building types. Sorting the tax lots by their valuation, I split them into three sets and “redeveloped” each set by deleting the existing buildings, sub-dividing the largest lots, and assigning an appropriate building type based on job and housing goals for the particular development. Using both color and form, I ultimately created iconic models representing each of the building types on a lot.
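The sort-and-split step above can be sketched as follows. The lot records and field names are hypothetical stand-ins for CityEngine’s object attributes, but the logic mirrors the redevelopment pass: order the lots by valuation and carve them into phases.

```python
# Sketch of the redevelopment pass: sort tax lots by valuation
# (lowest, i.e. most under-valued, first) and split into equal phases.
def split_into_phases(lots, phases=3):
    ordered = sorted(lots, key=lambda lot: lot["valuation"])
    size = -(-len(ordered) // phases)            # ceiling division
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

lots = [{"id": i, "valuation": v}
        for i, v in enumerate([900, 150, 400, 700, 250, 600])]
phase1, phase2, phase3 = split_into_phases(lots)
# phase1 now holds the most under-valued lots, redeveloped first.
```

Each phase would then be handed to the demolish/subdivide/assign routine, with the building type drawn from the development-type mix for that lot’s grid cell.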

Gateway District shown with 66% of its targeted lots redeveloped

Employing the generative algorithms, it is simple to enter different data into the spreadsheets and update the resulting growth patterns under alternative scenarios. The final image above, for example, shows how the Gateway District might look if 66% of the lots identified as “underutilized” by the City of Portland were redeveloped.

Heliotrope Switches to Creative Commons Licensing!

November 5, 2013

Heliotrope is getting new development activity after a lull these past few months! First and foremost is a switch to a licensing model based on the Creative Commons Attribution-NoDerivs 3.0 license. This change makes Heliotrope FREE for both educational and commercial use, but requires that credit be given to Slate Shingle Studio and Heliotrope whenever it is used on a project. The NoDerivs part of the license means that it is not an open-source project; the source code remains proprietary. The exact legalese is still in the works because the CC licenses are not really supposed to be applied to software; that is, they don’t mention source code. For now, I hope that the intent is clear. Please ask if you have any questions.

With this change, there is a new build of Heliotrope now available on Food4Rhino that does not require a license key. This is still a beta version, although it is very stable and has held up well over the past several months. To move past beta I intend to update a few features based on the excellent suggestions I’ve received. The primary change will be to incorporate a timezone offset into the Julian Day data type Heliotrope supplies. This will alleviate the need to keep specifying the offset whenever the Julian Day is displayed as a string. I plan to incorporate the date-string format generated by David Rutten’s calendar objects into the Julian Day type as well. Finally, the shadow projection component will be modified to handle multiple receiving planes.
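The timezone-offset idea can be sketched like this. This is not Heliotrope’s actual API, just a minimal illustration of the design: store the UTC offset alongside the Julian Day so string formatting no longer needs it passed in separately. (JD 2440587.5 corresponds to the Unix epoch, 1970-01-01 00:00 UTC.)

```python
from datetime import datetime, timedelta

# Sketch (hypothetical, not Heliotrope's API): a Julian Day value that
# carries its own UTC offset, so formatting as a string needs no extra input.
class JulianDay:
    UNIX_EPOCH_JD = 2440587.5    # JD of 1970-01-01 00:00 UTC

    def __init__(self, jd, utc_offset_hours=0.0):
        self.jd = jd
        self.utc_offset_hours = utc_offset_hours

    def to_local_datetime(self):
        days = self.jd - self.UNIX_EPOCH_JD
        utc = datetime(1970, 1, 1) + timedelta(days=days)
        return utc + timedelta(hours=self.utc_offset_hours)

    def __str__(self):
        return self.to_local_datetime().strftime("%Y-%m-%d %H:%M")
```

With the offset baked in, `str(JulianDay(2456600.0, -8))` renders local Pacific time directly instead of requiring the caller to repeat the offset at every display site.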

The upcoming 2.0 release will replace the PDF User Guide with a more flexible web based documentation tree here on Gnarly Architecture. For those interested in the shade analysis components included in the previous release, I have temporarily removed them from Heliotrope and will launch them separately as a sister project. I want to highlight the special nature of the geometric analysis they perform, above and beyond what is commonly available with solar vectors. Stay tuned and keep those cards and letters coming!

Geometrically Analyzing Solar Reflections

October 17, 2013
Fig.1 Initial view of the problem: a flight controller standing in a control tower with a solar array to the north potentially reflecting the sun back at him.

Fig. 2 Ten arcs centered at the flight controller’s viewpoint show the sun’s annual movement range.

In an NPR interview discussed here recently, MIT’s Christoph Reinhart briefly described a solar design problem he had worked on at a “nearby airport.” Using Radiance, Reinhart’s team made an in-depth study of the specular effects of PV panels from different manufacturers in order to determine whether reflections from a nearby PV array would cause glare issues for a person standing in the control tower.

Let’s back up for a moment and take a look at the overall problem of identifying when reflections would be in the flight controller’s view. To do this, I’ve recreated the scenario in the Rhino image shown above (Fig. 1). I’ve placed the model here in Portland, Oregon, at approximately 45 degrees north latitude. I have correspondingly angled the solar array at 45 degrees towards the south.

Fig.3 The windows surrounding the flight controller are projected onto the celestial sphere. Orange segments show when the sun is visible, blue when it is shaded.

First, I created a series of arcs showing the sun’s path relative to the controller’s viewpoint (Fig. 2). Using the Heliotrope plugin I developed for Grasshopper, I generated ten arcs and evenly spread them between the winter solstice at the bottom and the summer solstice at the top. The arcs lie on a sphere centered at the controller’s viewpoint. When the solar paths (in orange) are flattened onto a plane at the viewer’s height (shown in green), we get the same arcs shown in the classical 2D Pilkington sun path charts traditionally used in hand-drawn solar analyses.
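The geometry behind each arc point is simple: a solar position given as azimuth and altitude maps to a point on a sphere around the viewer. The sketch below assumes the common convention of azimuth measured clockwise from north and altitude measured up from the horizon; it is an illustration of the math, not Heliotrope’s code.

```python
import math

# Sketch: map a solar position (azimuth, altitude) to a point on a sphere
# of a given radius centered at the viewer. Axes: x east, y north, z up.
def sun_point(center, radius, azimuth_deg, altitude_deg):
    az, alt = math.radians(azimuth_deg), math.radians(altitude_deg)
    x = center[0] + radius * math.cos(alt) * math.sin(az)   # east
    y = center[1] + radius * math.cos(alt) * math.cos(az)   # north
    z = center[2] + radius * math.sin(alt)                  # up

    return (x, y, z)

# Sampling one day's solar positions yields a polyline approximating one
# arc; dropping each point to the viewer's height gives the flattened
# 2D Pilkington-chart version.
```

For example, the sun due south at 45 degrees altitude lands on the sphere south of and above the viewer, with no east-west component.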

Let’s ignore the solar panel for a moment and first analyze the tower windows surrounding the viewer. From his viewpoint, we can project the window borders, along with the arcs, onto the solar sphere. This projection (Fig. 3) is done using the virtual heliodon component in Heliotrope. The periods in which the arcs appear in the window tell precisely when the sun will be directly in the controller’s view.

Fig.4 Using the reflection component in Heliotrope, we project vectors from the controller’s point of view to the corners of the solar panel and reflect them upward to the sky.

The orange arc segments now show when the sun is in the controller’s view and blue segments show when it is shaded by the surrounding walls. Of course this is an excellent starting point for determining when additional shade devices are needed to protect the viewer.

So far, so good. However, understanding when the sun will reflect off the solar panel to the viewer is a slightly more complicated problem. The first step here is to project vectors from the viewer’s eye point to the corners of the solar panel and reflect them towards the sky as shown in Fig.4.  As with the projected windows, these reflected vectors point toward the days and times when the sun will be reflected up to the viewer. Because they are now separated out to different base locations, however, it is unclear how to transfer that information onto our solar sphere.

To do so, we must remember that we are visualizing the sun as infinitely far away, using a Ptolemaic view in which the universe is not just earth centric, but in fact centered right on us!  With this assumption, we can move the base points of the reflected vectors wherever we like and assume they are still pointing to where the sun’s reflection will become problematic.

In the final picture (Fig. 5) we group the vectors together at the controller’s viewpoint, project them out onto the celestial sphere, and again split the solar arcs into hidden and exposed portions that now tell us when the sun’s reflection will appear in the viewer’s eyes. To the uninitiated this may appear confusing since it seems to say that the sun is coming directly through the tower roof.  What it correctly tells us, however, is that during the dates and times the sun appears in those positions on the solar arcs, it will also reflect off the PV panel to the controller’s viewpoint. I’ll leave it as an exercise for the reader to add the window and additional shade elements into the multi-legged reflection path.
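The reflection step itself is the standard mirror formula, r = d − 2(d·n)n, where d is the sight-line direction and n the panel’s unit normal. The sketch below is an illustration of that formula (not Heliotrope’s internal code) using the scenario’s geometry: a panel tilted 45 degrees toward the south.

```python
import math

# Sketch: reflect a view direction d off a plane with unit normal n,
# using the mirror formula r = d - 2 (d . n) n.
def reflect(d, n):
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

# Panel tilted 45 degrees toward the south: its normal points halfway
# between up and south. Axes: (east, north, up).
n = (0.0, -math.sqrt(0.5), math.sqrt(0.5))
d = (0.0, 1.0, 0.0)        # horizontal sight line due north, into the panel
r = reflect(d, n)          # bounces straight up toward the zenith: (0, 0, 1)
```

And because direction is all that matters under the infinitely-distant-sun assumption, these reflected vectors can be re-based at the viewer’s eye point before being projected onto the celestial sphere, exactly as Fig. 5 shows.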

Fig. 5 The reflections of the solar vectors are clustered to the viewer’s eye point and projected onto the celestial sphere, showing when the sun’s reflections will be a problem in the flight controller’s view.

Calatrava and China: Bad News for Both this Week

September 28, 2013

Two fascinating articles this week detailed architectural problems from opposite ends of the world. The first is a New York Times article on problems in Valencia, Spain with Santiago Calatrava’s works there. Justifiably proud of their native son, Valencia bought into engineer/architect Calatrava big time. Local municipalities are now paying a price they can little afford, with enormous cost overruns and maintenance nightmares. New Yorkers have a vested interest in this story because their new PATH station at Ground Zero, designed by Calatrava, is also facing major cost overruns.

Calatrava’s Sundial Bridge in Redding, California. Notice the large “Slippery When Wet” sign. Glass decks are pretty and good for the fish below, but not very practical.

A recent visit to the Sundial Bridge in Redding, California, brought home to me how much I love Calatrava’s work, particularly his bridges.  There is something about the process of traversing a fragile pathway over a hazardous obstacle that emphasizes the beauty and elegance of the structure he creates to hold it up. The Sundial is well worth a quick detour off Interstate 5. It features a lovely sculptural cable-stayed design. The support tower also acts as the gnomon of a giant sundial, the hours demarcated along a nearby walkway.

From the other side of the world, this op-ed article in Nature Climate Change tells why the approval of nine new synthetic natural gas (SNG) plants in China is a catastrophic solution to their coal-generated air pollution problems. The goal is to convert Chinese cities from coal to SNG. The problem is that the conversion from coal to SNG creates far more greenhouse gases and ultimately represents a greater threat to the earth as a whole. This is just another case of the cure being worse than the disease, as is so often true with energy solutions that try to recast the problem from one form of energy production into another without addressing the real issue of over-consumption.

Finding What’s New Under the Sun

September 25, 2013

Dr. Christoph Reinhart spoke on NPR last week about Rafael Viñoly’s Walkie Talkie making the news for melting a nearby Jaguar automobile (see my previous post on the subject here). Dr. Reinhart, who leads the Sustainable Design Lab at MIT, is responsible for the DIVA plugin for Rhino/Grasshopper, which interfaces Greg Ward‘s Radiance ray-tracing engine to Rhino models. He gave a good interview, mentioning several recent examples of the solar light-pollution problem, including the Museum Tower in Dallas and the Vdara in Las Vegas.

Unfortunately the interviewer gave the impression that ray tracing was a new technique of Dr. Reinhart’s, a solution now being applied to buildings that is somehow sufficient to solve the ongoing problem of reflected light pollution. Of course ray tracing has been around for decades, and Radiance for over 15 years. DIVA, along with a number of other interfaces such as OpenStudio and Ladybug, provides access to the much more complex underlying software engine that they all have in common.

The key feature of Radiance is that it is one of the few physically accurate rendering engines; most architectural rendering tools shortcut reality in order to quickly generate beautifully lit scenes. There are, however, two fundamental flaws in the ray-tracing approach used by Radiance. First, the power, accuracy, and detail of the model lend themselves to analysis rather than design. It’s one thing to model a building after it has been designed and built and then figure out how much window-film treatment is required to reduce the resulting glare to tolerable levels. It’s another to provide a fleet-footed capability to help design the building right in the first place. We need tools that help designers as the form is being created, not after.

And that leads to the second flaw: Radiance analyzes form but doesn’t require actively thinking about it. The problems in these new buildings are all about the geometry, and designers need to understand the effect of the geometry directly. Radiance generates random vectors from a source or observation point and follows them to see where they happen to end up. Simple and easy to generate when you know nothing about your environment, this approach makes no attempt to use the underlying geometry to guide the analysis process. As a result it has to work harder, and it provides little information to help the designer understand problems in the underlying geometry.

Returning to the example of the University of Maryland Ellipse I recently studied for HDR Architects in Maryland: I did much of this analysis using the solar vector generator in Heliotrope, guided by my understanding of the geometry I was examining. Essentially I used my knowledge of the building geometry to construct a custom ray tracing of the specific problem occurring there. I later ran a Radiance study of the same space and compared the two results over an annual period. Notice the green dots speckled through the Radiance output? Those dots represent points at which the random ray-generation algorithm concluded an insignificant amount of energy was falling on that location over the annual period. The difference between a green dot and a neighboring red one is purely the result of the random number generation and an insufficient number of test iterations run to smooth out the results. How many runs would be sufficient? That is unknowable ahead of time; just more than the number I ran.

Annual irradiance study using Radiance and the DIVA interface. The green dots mixed in with red in the center of the image are random anomalies.

The directed ray-tracing approach generated smoother and more consistent results. Both studies say the space below is hot. Radiance gave energy numbers as well; however, the quality of those numbers is quite suspect, since I had no real materials data on which to base the window reflectivity. Furthermore, the directed ray-tracing approach gave a much more precise understanding of where the hot zone would occur and how it would travel over the space during the day.

Annual reflection study using custom ray-tracing provides a smoother and clearer assessment of the hot zone coverage without the anomalies.

We can do even better, however, by projecting the actual geometry along the ray paths to understand how it obscures or reflects the sun. In the physical world a Heliodon does exactly this, taking the surrounding features and projecting them onto a spherical dome surface that provides an immediate understanding of when an observer will see the sun throughout the year. It eliminates the need for the hour-by-hour animations typically used and described in Reinhart’s interview. I developed a virtual Heliodon component in the new version of Heliotrope which provides the same capability. That component incorporates a proprietary spherical geometry engine which makes it blazingly efficient: so efficient that it is possible to project urban-scale geometry and update it in real time. Below is an image of solar access at a point in one of Portland’s downtown open pedestrian plazas, with the surrounding building geometry projected onto it.
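The core operation behind such a projection is a simple central projection (this sketch is an illustration of the idea, not the proprietary engine): each vertex of the surrounding geometry is pushed along its sight line onto a sphere of fixed radius centered at the observer.

```python
import math

# Sketch: centrally project a point onto a sphere of the given radius
# centered at the observer, preserving its direction from the observer.
def project_to_sphere(center, radius, point):
    v = tuple(p - c for p, c in zip(point, center))
    length = math.sqrt(sum(vi * vi for vi in v))
    return tuple(c + radius * vi / length for c, vi in zip(center, v))

# A building corner 30 m east of the observer at eye height lands on a
# 5 m sphere in exactly the same direction, 5 m away:
corner = project_to_sphere((0.0, 0.0, 1.5), 5.0, (30.0, 0.0, 1.5))
```

Projecting every edge of the surrounding buildings this way paints the skyline onto the dome; comparing that silhouette against the solar arcs then reads off shaded and exposed periods directly, with no per-hour animation.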

This virtual Heliodon placed in Director’s Park in downtown Portland, illustrates how urban geometry can be projected onto a spherical element giving a bird’s eye view of annual shade coverage at a point in the square.  If flattened to two-dimensions, this projection is effectively the same as the hand-drawn Pilkington sun calculators used in the past.

Dr. Reinhart brought up an additional example of a problem they had recently studied using traditional ray tracing that might have benefited from a geometric projection approach: an analysis they performed for an airport where a solar PV panel was causing reflected-glare issues in the control tower. Using geometric projections we can answer the question of when the sun will be reflected into the observer’s field of view without resorting to animation studies, providing a quick, clear, and accurate understanding of the overall design situation. Please return next week and I will work up the details for that example in the next post.