
Heliotrope Takes Manhattan(henge)!

July 7, 2014

Today I’m presenting the technology behind Heliotrope at the American Solar Energy Society (ASES) Solar 2014 Conference in San Francisco.

I developed Heliotrope as a plugin for Rhino and Grasshopper that performs solar analysis using geometric projections. Algorithmically, its most complex components are the spherical shadow projections, or what I like to call “shadow bubbles.” A shadow bubble shows the annual exposure of a single point location to the sun. It is the mathematical “dual” of a typical computer shadow study which shows shading at a single point in time over an area.

Since this week also marks a Manhattanhenge occurrence on July 12th, I thought it would be fun to use a Heliotrope shadow bubble to analyze the coming event.

An overview of the 3D model of Manhattan used for the analysis

Manhattanhenge, a term coined by astrophysicist Neil deGrasse Tyson, occurs twice yearly when the setting sun aligns with the east-west streets of the main street grid in the borough of Manhattan in New York City. The biggest challenge for my quick analysis was finding a model of Manhattan in short order. The top Google result shows that Harvard GSD has huge online GIS models of Manhattan and Boston, but they are only available to people on the Harvard network. Not so useful for a non-Harvard person living in Portland. After some searching, however, I was able to find a nice block model at TurboSquid for only $6. The model came in Autodesk’s FBX exchange format, which I easily converted to DWG using Autodesk’s free FBX converter and, from there, loaded into Rhino.

Overview of model with 34th Street and the shadow bubble in place in front of the Empire State Building

For simplicity, I narrowed the model down to just the buildings along 34th Street, which is one of the most dramatic locations from which to observe the event. Next I created a shadow bubble – a projection of the urban geometry surrounding a single viewpoint onto a sphere centered at that point – in the street in front of the Empire State Building.

The shadows of overlapping buildings may be combined into single geometric elements that, when combined with solar arcs mapped onto the same sphere, result in a bubble that shows when that point of view will be exposed to or shaded from the sun over the entire year. Heliotrope does this projection particularly well because it uses a spherical geometry model. In spherical geometry, points on the sphere may be treated as two dimensional locations because the 3rd dimension of the sphere’s radius is arbitrarily set to 1.0 for analysis and then scaled as needed afterwards. Line segments on the building models project as great circle arcs on the sphere. Intersecting these arcs is easier than intersecting general 3D curves in space. Such simplifications make Heliotrope’s spherical geometry approach faster and more accurate than attempting to do the same analysis in Cartesian coordinate space.
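The projection step can be sketched in a few lines. This is a simplified illustration of the spherical-geometry idea, not Heliotrope’s actual code: each vertex is normalized onto the unit sphere around the viewpoint, and two projected edges meet where the planes of their great circles cross.

```python
import math

def to_sphere(p):
    # Project a 3D point onto the unit sphere centered at the viewpoint (origin).
    x, y, z = p
    r = math.sqrt(x * x + y * y + z * z)
    return (x / r, y / r, z / r)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def great_circle_intersections(a1, a2, b1, b2):
    """The great circle through points a1, a2 lies in the plane with
    normal a1 x a2; two great circles intersect along the line where
    their planes cross, giving a pair of antipodal points."""
    n1 = cross(a1, a2)
    n2 = cross(b1, b2)
    d = cross(n1, n2)
    m = math.sqrt(sum(c * c for c in d))
    d = tuple(c / m for c in d)
    return d, tuple(-c for c in d)
```

Because every projected edge reduces to a plane through the origin, the intersection test is three cross products and a normalization, with none of the root-finding that general 3D curve intersection requires.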

View looking down 34th Street from the west with solar arcs for May 29th and July 12th dropping into view just at sunset

Looking straight down 34th Street from the west, the shadow bubble shows dark grey where the sky will be occluded from view and blue in the narrow slot where it can be seen. When we add the solar arcs for the dates of this year’s two events – May 29th and July 12th – we see that they drop right into the projected shadow canyon at sunset on those dates. Beautiful.

The analysis made me wonder, however, why only sunsets are of interest in Manhattan. What about the sunrise dates? Turning the model around and looking from the east, I used Heliotrope’s “When” component to determine the dates on which the sun would align with the street grid at sunrise. I found that it does in fact align on January 10th and December 2nd, 2014.
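As a rough cross-check of those sunset dates (this is a back-of-envelope approximation, not Heliotrope’s method: it uses Cooper’s declination formula, a flat refraction-free horizon, and my assumed values of 40.78° for Manhattan’s latitude and 299° for the grid-aligned sunset azimuth), the alignment dates can be estimated directly:

```python
import math

LAT = 40.78   # Manhattan latitude in degrees (assumed)
GRID = 299.0  # sunset azimuth along the grid: 29 deg north of due west (assumed)

def declination(day):
    # Cooper's approximation of solar declination, in degrees.
    return 23.44 * math.sin(math.radians(360.0 / 365.0 * (day - 81)))

def sunset_azimuth(day, lat):
    # Azimuth of sunset clockwise from north, flat horizon, no refraction:
    # cos(A) = sin(declination) / cos(latitude), reflected into the NW quadrant.
    x = math.sin(math.radians(declination(day))) / math.cos(math.radians(lat))
    return 360.0 - math.degrees(math.acos(x))

# Find the best-matching day on each side of the June solstice (~day 172).
spring = min(range(1, 173), key=lambda n: abs(sunset_azimuth(n, LAT) - GRID))
summer = min(range(173, 366), key=lambda n: abs(sunset_azimuth(n, LAT) - GRID))
```

Even this crude model lands on day 149 (May 29th) for the first event and within a few days of July 12th for the second.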

From the east, on January 10th and December 2nd the arcs also line up with the street grid for a brief moment at sunrise.

Which leads me to further ask, why don’t New Yorkers ever mention these Manhattanhenge sunrise events? Is it possible that the weather is always bad on these wintery dates? Or is 7am too early for our modern day druids to hit the streets? One does wonder!

Normalized Sliders in Grasshopper

March 24, 2014

Here is a simple trick I find new Grasshopper programmers often miss out on: Normalized Sliders.  Out of the box, Grasshopper provides a 0-1 slider.  Typically, after placing one in your program, the next step is to right-click the slider, set its minimum and maximum values, choose integer versus real values, and so on.  Any subsequent changes require going back in and changing those settings by hand, as they cannot be updated automatically.  Furthermore, the settings are not visible to the user without deliberately accessing them again through the right-click menu – an easy-to-forget update.

At the top level, my normalized slider is converted to a 10-20 value range through the Normalizer component

Sliders are a critical feature in Grasshopper and I would love to sit down and develop a set of more sophisticated ones that better reflect my needs, but life is short. In the meantime, here is what I do: simply use the out-of-the-box 0-1 slider – without any changes – and normalize the range over which it creates my actual parameter. It’s a simple operation: if my range is from 10 to 20, I take the difference (20-10), multiply by the 0-1 slider value (say it’s at 0.47, so we have (20-10)*0.47=4.7) and add that back to the minimum: ((20-10)*0.47)+10=14.7.  Voilà!  My normalized slider is now controlling a range between 10 and 20. The min and max values are visible to my program and can be driven by other program inputs.  With some cleverness and a little care at the endpoints, we can round the output to create integer values over the range, even-numbered values, etc. This technique is particularly useful when setting up animations, when there is often a lot of back and forth between tweaking the range of values over which the animation will run and testing the animation results.
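In code, the whole trick is one line. Here is a sketch of the remapping (in Grasshopper itself this is just a Multiplication and an Addition component; the integer-rounding flag is my own addition, not part of the stock components):

```python
def remap(t, lo, hi, to_int=False):
    """Map a normalized 0-1 slider value t onto the range [lo, hi]."""
    v = lo + (hi - lo) * t
    # Optionally round to the nearest integer in the range.
    return int(round(v)) if to_int else v
```

Because lo and hi are ordinary inputs, they can be wired to other parts of the program and updated automatically, which is exactly what a hand-configured slider cannot do.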

The top level screenshot above and the detail below show the calculation boxed into a “Normalizer” component for easy reuse.

Inside the Normalizer component, a simple calculation is used to generate the desired range value from the 0-1 input.

Note that the Min value input to the component is first passed through a “Number” object and then fanned out to the two locations it is used in.  This allows the component to have a single input for “Min” connected internally instead of two which must be connected externally. Remember, the key to successful programming is to eliminate as many mistakes as possible before they happen instead of tracking them down later!

Initial Experiences with OpenStudio and EnergyPlus

February 11, 2014

As I mentioned in a previous post, I’ve begun to integrate EnergyPlus modeling into my workflow, using NREL’s OpenStudio to interface to it.  I’ll talk more about my current projects in future posts, but while I’m in the thick of it, I thought it might be worthwhile to give a few initial impressions of OpenStudio v1.1.0 coupled with EnergyPlus v8.0.0.

The Department of Energy (DOE) developed EnergyPlus in the 1990s with the goal of combining the best of the preceding DOE-2 and BLAST efforts. Energy simulation engines are big, complex pieces of software, requiring larger research and development funding than was available through commercial channels. This excellent article shows how the oil embargoes of the 1970s triggered government interest in building energy modeling. And yet, as we all know from the 2030 Challenge, building operations still consume way more than their share of national energy expenditures.

EnergyPlus is a simulation engine with a crude 1990s era text-based file format interface.  As it says on DOE’s about page:

“EnergyPlus is a stand-alone simulation program without a ‘user friendly’ graphical interface. EnergyPlus reads input and writes output as text files. A number of graphical interfaces are available.”

The text interface is actually great news, because if you want to examine precisely what is going into and out of the simulation program, with nothing hidden, you can look into those text files and see all the nuts, bolts, bells and whistles. The graphical user interfaces (GUIs) like OpenStudio generate those text files, hopefully with a more user-friendly interface. We see this in daylight modeling tools as well, where Lawrence Berkeley National Laboratory’s Radiance simulation interface has spawned a number of GUI tools to better facilitate creating the complex text-based input files. It is actually funny that the human-readable text files are so complex that they are, well, difficult for humans to read. But when it gets down to the simulation itself, you can always check the details of what is going into the simulator by looking at the text-based file generated by the GUI.
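For a sense of what those text files look like, here is a tiny illustrative fragment of an EnergyPlus input (IDF) file. The location values are placeholders I made up for Portland, not part of any shipped example; comments follow the `!-` marker:

```
  Version,8.0;

  Site:Location,
    Portland OR,   !- Name
    45.52,         !- Latitude {deg}
    -122.68,       !- Longitude {deg}
    -8.0,          !- Time Zone {hr}
    12.0;          !- Elevation {m}
```

A real model runs to thousands of such lines covering geometry, constructions, schedules and HVAC, which is exactly why the format is simultaneously fully inspectable and hard to read.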

Okay then, the goal of a GUI for EnergyPlus is to provide a more convenient way for users to create models. Unlike the energy simulators themselves, they cost a lot less and are much easier to develop. In fact, there are several commercially available GUIs already developed for EnergyPlus. The links to them are right there on the web page. So why OpenStudio? Ordinarily the government does not stifle commercial development by competing directly with it,  so it’s a little unclear to me why OpenStudio exists.

Currently, I have no answer to that question. It’s possible that because the government labs such as NREL are also users, it seemed wasteful for them to have to buy commercial interfaces to their own underlying freely available simulation engines. Or perhaps as expert users they have found the commercially available options insufficient for their needs. Or maybe the cost of the commercial tools is still creating a barrier to wider adoption of energy modeling as a part of the design process. Whatever the reason, over the last few years NREL developed OpenStudio as a freely available GUI to both the EnergyPlus and Radiance simulation engines, utilizing SketchUp as a modeling interface.

But don’t let the SketchUp part fool you into concluding that OpenStudio is either a) not very sophisticated or b) something that you can plug your existing SketchUp design modeling straight into.

The basic hotel example provided by OpenStudio.

As for (a) sophistication, SketchUp is being used to construct the Building Energy Model (BEM) required for energy simulation. The annotations available to define materials, HVAC systems and the like are all still there.  The construction characteristics of each surface must be defined and added to the model, along with temperature set points for the thermal zones, usage characteristics, etc.  There is plenty of room for parameters, materials and set points.

As for (b) simply plugging in your SketchUp massing model with no further thought, the OpenStudio interface groups room boundaries together and assigns the surfaces to pre-defined layers which denote specific conditions – ground contact versus outside air, for example. Given the way that elements in different groups interact in SketchUp, I found it extremely difficult to create the zone groups from an existing model and much easier to start from scratch alongside it, using the existing elements as quick references for scale and dimensions.

SketchUp is, however, an excellent choice for BEM creation because it inherently limits the user to the planar surfaces required by the energy modeler. EnergyPlus doesn’t understand NURBS, so regardless of how free-form your actual building is, you are going to have to planarize it.

Once you have your building thermal zones massed out in SketchUp, the next step is to refine the annotation details using the OpenStudio interface. Here’s where I start to run into difficulties. While on the surface the GUI appears to satisfy the goal of a nice, clean user interface, a little below the surface it begins to lose its integrity. First, OpenStudio requires that you start with a building template, and 16 are provided for various ASHRAE standard building types. Unfortunately, they all come preloaded with a number of predefined construction types and materials that may not suit your needs and become confusing when you are trying to sort out what’s being used and what’s not. I found no barebones template reflective of the simple energy modeling examples provided by EnergyPlus itself. Remember those text files that I mentioned could be examined to verify all the details of your model? OpenStudio refused to import the provided EnergyPlus example files. Worse, when I tried to strip a template of the extraneous materials, I quickly arrived at the point where OpenStudio could no longer run the file, without giving a clear explanation of why, except that it was missing elements that did not appear to be required by the energy modeler itself.

This takes us to the next problem, error handling. An OpenStudio error when constructing a model takes the form of refusing to do something without telling you why. You can’t drag and drop a material into the wrong place, which is good. But it also won’t give you a clue as to why, which is bad for coming up to speed. Meanwhile, if you do create a model that EnergyPlus doesn’t like it is only possible to debug by digging out the appropriate text-based log file and deciphering a cryptic message from the simulation engine, a message that you are less familiar with because you are using the GUI to distance yourself from that very model.


The GUI becomes confusing when elements appear in multiple locations without a clear distinction of why they differ.

The GUI itself could use some additional work as well. Again, at the top level it is fairly clean and reasonably well organized. Drag and drop works okay, but is often not needed when a simple double-click-select would do. Once inside some of the tabs, however, things become a lot less clear. On the construction tab for example, there are three different columns where materials appear, one labeled “library”, one “model” and one for…? I guess it’s for materials assigned to actual objects in the model. Honestly I’m still sorting this one out. If what’s in the model file is more than what is assigned to actual objects (which it appears to be because of those templates with all sorts of extraneous materials included), why are they included in the output files instead of being left in the library? What is the difference between the library and model columns if the materials in the model columns are not those assigned to actual objects?

Conclusions? OpenStudio v1.1.0 provides a powerful mechanism for creating input models for EnergyPlus, but there is still a great deal of getting up to speed required and it does not eliminate the need to understand the complexities of the underlying EnergyPlus models. I was especially frustrated by the lack of a basic, empty model with no extraneous, unused parts and I hope that will either be remedied or that someone will point me to it if one exists. All that said, if you put the time in to become expert with it, OpenStudio provides a reasonable avenue to access the EnergyPlus simulator. I hope we will see many future refinements but, in the meantime, it remains a powerful engine that I am happy to be putting to work now.

ArchDaily, Folk Art and Moore’s Law

January 17, 2014

It’s a rare thing to combine these three seemingly unrelated topics into one post, but today’s ArchDaily article on the upcoming demise of Tod Williams and Billie Tsien’s beautiful American Folk Art Museum in NYC does just that. It compares architectural obsolescence to Moore’s Law, Intel founder Gordon Moore’s statement about projecting technology growth. Like all who love architecture, I also have opinions about the museum; but it’s as a computer scientist that I am jumping in with a response.

The importance of the Moore’s Law premise that technology doubles in density approximately every two years is that it identifies an exponential growth rate. Amazingly enough, Moore’s projection has held true in high tech for over four decades, although his law is so ubiquitously used in planning for high-tech growth that it may be guiding growth as much as measuring it. Nevertheless, exponential growth is a scary thing to deal with. Witness, for example, our fears about population growth. Such growth rates are typically not sustainable in nature, as factors such as global food supplies kick in to limit the earth’s carrying capacity for human beings.

How is this related to the Folk Art Museum? ArchDaily references Mimi Zeiger’s DeZeen op-ed where she opines that the high-tech growth curve has a direct impact on our society. This year I upgraded my iPhone immediately when my 2-year contract expired, not because I was unhappy with the hardware, but because experience told me that I would soon become unhappy as apps and operating systems moved forward to take advantage of the ever increasing hardware capacity available in the new phones. That hardware capacity is the subject of Moore’s Law. Everything else is cultural fallout.

ArchDaily extends Zeiger’s statement about technology’s impact:

“But in the case of architecture it seems there may be a variant of Moore’s Law at work. When one architecture is swapped out for another architecture, what replaces the original can be larger and more dissipated. It is the Folk that is smaller and more powerful…though not powerful enough to alter the course of MoMA’s board and DS+R’s mouse clicks.”

Okay, hold the presses. Let’s be careful with pushing analogies too far. Architecture is not growing at an exponential rate and even the rate of cultural change imposed by technology growth cannot begin to justify demolishing the Folk Art Museum after a mere dozen years of existence. This is not a variant of Moore’s Law, this is a function of the economics of the New York real estate market and a world-class museum powerhouse such as MOMA steamrolling ahead despite the admitted cost.

I cannot second guess MOMA’s decision, but neither will I excuse it with a loose application of a measure best left to describing decreasing transistor line width. We are fortunate that construction and demolition is not an exponential growth process and that even in our deadline-driven industry there is still time for thoughtful approaches to design and planning. Edward Mazria’s 2030 Challenge tells us of the great opportunity to create a sustainable future by improving our design processes and standards. Critics of unthoughtful green design, such as Sim Van der Ryn in his Design for an Empathic World: Reconnecting People, Nature, and Self, provide additional arguments for the unsustainability of embodied culture on top of embodied energy. Whether or not it makes economic sense, the tragedy of the demise of the Folk Art Museum is that MOMA, a museum honoring modern design and culture, could not find a way to honor one of the finest examples of that culture found just across the street from its own front door.

Tapping into Virtual CPUs

January 14, 2014
Amazon EC2 login screen is the starting point for virtual access

To start the new year off right, I’m generating a series of blog posts about my experiences with various tools, what’s working and what isn’t. Over the holidays I experimented with NREL’s OpenStudio energy and daylight modeling tool, using Amazon Elastic Compute Cloud (Amazon EC2) in the process.

The first step for me was setting up a virtual server on the Amazon EC2 cloud. Why? As an independent designer and software developer I am often called upon to explore new tools and capabilities that might turn into work, or might not. Yet all of them require installation. Of course we all know how dangerous it can be in the PC world to repeatedly install and uninstall software. Sure enough, an architect with whom I work experienced significant installation problems with OpenStudio when he installed it for a workshop a while back. If such problems are to plague me, I want to make darn sure they don’t knock out my primary workstation for any period of time.

In addition to wanting a clean slate on which to practice, I like the idea of having an expandable resource base in order to increase my hardware resources as needed. I’d also like to run longer tasks or multiple tasks without conflict; log in remotely from anywhere with my laptop to the active machine, without using LogMeIn or establishing a VPN server in my office space; and provide resources to additional developers when available; all on a pay-as-you-go basis.

Setting up my Amazon account was easy and obvious. Amazon offers the smallest “t1.micro” class free for one year, which I found was a great way to get started. I don’t have to worry about the hourly cost of my learning curve and it gives me a reference point against which to compare performance needs later on.

To start, I installed SketchUp and OpenStudio on the t1.micro and began to exercise the programs, but I found the response time of SketchUp to be pretty bad. Stopping the instance and restarting on a larger configuration, an m1.medium, corrected that problem.  The cost of running an m1.medium is very small, and it is trivial to switch between configurations. I switch back to a free t1.micro instance during slow times, such as when I’m wading through the OpenStudio tutorial videos. (With a dual-screen workstation, I run the tutorials in a browser locally on one screen, with the remote desktop to my Amazon instance open in the other.)

Overall, the experiment has been a complete success. I appreciate not having the additional software installed on my workstation until I’m ready to commit to it in the longer term. I now have an initial machine configuration up and running on the cloud for quick expansion when I need it. Although there were some tricks to the initial configuration, none was too difficult. It’s a great way to have extra resources sitting on the shelf to access when I need them. If the virtual hardware trade-off works, perhaps the next thing to take a look at is renting software!

20 Fenchurch Makes the 5 Worst List for 2013

December 31, 2013
Temporary mesh screens applied to the south face of 20 Fenchurch to reduce glare reflections.

The Telegraph has listed Rafael Viñoly’s 20 Fenchurch as one of Britain’s 5 worst architecture projects of 2013. This is the building whose solar convergence formed by its doubly-curved facade melted the plastic off a nearby Jaguar last September. With subsequent demonstrations of eggs frying and photos of heat fractured tiles in neighboring buildings, you can well imagine the British press’ delight in reporting on their new “fryscraper.”

But seriously, one has to wonder about the thinking behind such an egregious design error, particularly in “light” of Viñoly’s previous deathray design at the Vdara Hotel in Las Vegas. With the Vdara’s well-earned renown as a case study of the perils of concave south-facing glass facades, how could the same architects have made the identical mistake? Were they fooled into thinking that London’s cloudy weather could mitigate the problems that resulted in Vegas? If so, they were wrong. Temperatures inside a black bag left on the street below 20 Fenchurch were recorded as high as 198 degrees Fahrenheit.

And it doesn’t require a complex double concave face to create a solar convergence problem. We’re seeing reports that reflections from the slanted flat face of Richard Rogers’s new Cheesegrater Tower in London are also blasting neighbors.  Even the more traditional, oval shaped Museum Tower in Dallas glares down on Renzo Piano’s beautifully day-lit Nasher Sculpture Center with devastating results.

It leads me to wonder: what would Gehry’s Disney Concert Hall look like if the solar reflections had been considered earlier, before it was necessary to sandblast the polished titanium surfaces to reduce their reflectance? Would the building be a different shape? Different color? Different texture?

Let’s make 2014 the year of thoughtful solar design. Now that we’ve proven we can build any shape we darn well please, let’s start thinking again about how buildings relate to their environment and context. Let’s make it a year in which light pollution is considered when it ought to be – during the design process! Rather than fixing our masterpieces with expensive retrofits of window films, mesh screens or shades, let’s eliminate buildings that actively harm their surroundings and make 2014’s worst architecture list a discussion about taste.

AU2013 Day 1: The Hackathon

December 4, 2013


CASE Design successfully pulled off its first BIM hackathon last evening, and a fun evening it was! There may have been more socializing and beer drinking than hacking happening, at least around my laptop. I was particularly happy to meet Justin Botros of ProjectFrog, a San Francisco-based firm that designs and constructs modular prefabricated buildings, or “prefab component” buildings. As Justin explained, the term “modular” carries too many connotations of cheaply made accessory housing units. ProjectFrog does a lot of work for schools: a nice step up from temporary classroom buildings, and without breaking the bank.

I had explored the ProjectFrog website while doing my pre-conference homework so I was glad to have the chance to meet someone from the firm. I have long been interested in high-quality prefabricated construction, especially the work of Dan Rockhill’s Studio 804, and the economic thinking of KieranTimberlake, both in their built projects and in Stephan Kieran’s book, Refabricating Architecture. Despite the issues that Justin mentioned – building codes and construction unions not ready to make the switch – offsite fabrication makes a great deal of sense.

This morning, Jeffrey Vaglio of Enclos presented another spectacular prefabricated project, a suspended solar reflector element at the Fulton Street Transit Center in NYC. Stay tuned for more on that session in the next post.

AU2013 Day 0: Dynamo/DesignScript Workshop

December 3, 2013

I spent my first day at Autodesk University 2013 in a workshop devoted to new visual programming tools Autodesk is developing. Literally. Developing as we speak. Demo versions were compiled the previous night, and it showed.


Not quite “programming for non-programmers”

Autodesk continues to sell “programming for non-programmers.” Look at the amazing things you can do without writing a single line of code!

First, it isn’t true. By mid-morning we were seeing plenty of text-based imperative programming code. Second, there’s nothing wrong with that. I completely understand that programming represents a new kink in the traditional design iteration loop. But that doesn’t mean designers shouldn’t step up. It is frustrating, however, to see the patronizing way that Autodesk caters to the masses, emphasizing the gulf between their products and their clients. In fact, the gap just might be the natural outcome of a software company protecting its territory. We are developers. You’re not. We won’t pretend to understand what you do, if you don’t pretend to understand what we do.


The code abides

I find it particularly interesting that sometime between two months ago when I signed up for the DesignScript workshop, and last night, when they finished compiling, Autodesk made a major course correction: DesignScript and Dynamo merged.

Autodesk is vehement that DesignScript is great, that they can do great things with it because they own and control it. But the workshop examples don’t support that contention. There are some nice syntax hacks and the language is inherently functional – or as Autodesk terms it, “associative” – as opposed to imperative. But the value of that subtlety is likely to be lost on most designers. Many of the code examples presented explicitly overrode to an imperative mode.

And there was this bizarre response to a question about Python code-block nodes in Dynamo: we’ve provided multiple inputs but only one output because of language limitations. They then proceeded to show Dynamo nodes with multiple outputs as a comparison. Come on now.  No multiple outputs because it’s Python? I don’t think so. There’s no reason you cannot send an intermediate value to an output as well as a return statement. Perhaps what they mean is that they cannot automatically identify and publish all variables the way the DesignScript nodes do. A different variable identification mechanism would solve that problem with no trouble.
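To illustrate the point, here is a generic Python sketch (not Dynamo’s actual node template; the names are mine): a function can hand back any number of results by packing them into a list, which is all a single-output Python node would need in order to expose multiple values.

```python
def analyze(values):
    """Compute several results in one pass; returning them as a list is
    all it takes for one function to yield multiple outputs."""
    total = sum(values)
    mean = total / len(values)
    peak = max(values)
    return [total, mean, peak]

# In a Dynamo Python node the result would be assigned to OUT;
# downstream components could then index out the individual values.
OUT = analyze([2.0, 4.0, 9.0])
```

Nothing about the language prevents this; the limitation, if there is one, lies in how the node wrapper chooses to publish variables as ports.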

So the long and short of my first full day at Autodesk University? There is a lot of good work happening around Dynamo and there is a lot of hand waving going on regarding DesignScript, especially how it works with Dynamo and where it will go in the future. For now, if you are afraid of learning to program, with these tools you can still pretend you’re not.

Choosing the Right Programming Tools in CityEngine

November 26, 2013

In last week’s post on data driven visualization using CityEngine I showed that the results required using a variety of programming tools and techniques. No one programming environment or tool – traditional ArcGIS, CityEngine’s native “Computer Generated Architecture (CGA)” shape grammar, or direct Python scripting in CityEngine – was sufficient to accomplish all the tasks required. Ultimately, data-driven visualization requires a developer to be fleet of foot, recognizing which tool applies when, and implementing each to its best advantage.

GIS information is imported as shape layers. This layer shows the lots that have been identified as underutilized. On the right hand edge the object attributes panel shows additional GIS annotations for one particular lot that has been imported along with its shape.

ArcGIS is very powerful at manipulating the 2D geometry up front and then importing that data as annotated shapes into the CityEngine environment. Furthermore, Portland Metro has GIS experts more than ready to manipulate data by hand much more quickly than I can turn it into a stable program. Of course the goal was to see how far we could automate and replicate the process, so minimizing our dependency on manual processing was important as well.

Python programming creates routines using traditional imperative or functional programming techniques. The “Redevelop Routine” maps itself to the individual lots in a list, demolishes the existing buildings on a lot, and generates new building models based on its development type.

CityEngine’s Python environment is sparse. There is little geometry support, so even determining simple relationships between shapes takes extra time to get right. Importing a good third-party geometry package will help greatly in future work. Debugging support is also minimal: I found it frustrating to repeatedly track down small syntax errors by commenting out large chunks of code and adding them back in bit by bit. Ultimately my routines were filled with print statements saying “You made it to here!” But Python offers the ultimate flexibility. While I find indentation as syntax crudely reminiscent of Fortran and punch cards, Python’s list handling and comprehensions are elegant and fun to work with. They are also tailor-made for manipulating large sets of geometric shapes. Python also lets me import and use outside file-based data sources, such as the comma-separated value (CSV) files I used to map development types to building types. This is an important feature, as it allows a future non-programmer to adjust and iterate through different data configurations without having to hack the code at each step.
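The CSV mapping can be handled entirely with Python’s standard library, so it works even in CityEngine’s sparse environment. Here is a minimal sketch, assuming a hypothetical file layout with `dev_type`, `building_type`, and `share` columns (the actual column names in my spreadsheets may differ):

```python
import csv

def load_type_map(path):
    """Read a CSV mapping development types to building types.

    Assumed (hypothetical) columns: dev_type, building_type, share.
    Returns a dict of dev_type -> list of (building_type, share) pairs.
    """
    type_map = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            type_map.setdefault(row["dev_type"], []).append(
                (row["building_type"], float(row["share"])))
    return type_map
```

Because the mapping lives in a file rather than in the code, a planner can re-run a scenario with a different building mix just by editing the spreadsheet and exporting it to CSV again.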

This image shows a CGA rule file on the left-hand side.

Finally, CityEngine’s CGA shape grammar is by far the best environment for creating the iconic shapes used to represent building types. CGA rules also provide slick mechanisms for recursively processing shapes and selecting alternative rules based on percentage probabilities, which are not easily implemented in Python.
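For comparison, here is a rough Python sketch of the percentage-based rule selection that CGA provides natively; the rule names and percentages are hypothetical, and even this simple case is clumsier than CGA’s built-in syntax:

```python
import random

def pick_rule(rules, rng=random):
    """Select one rule name from [(name, percent), ...],
    mimicking CGA's percentage-based rule selection.
    Percentages are assumed to sum to 100."""
    roll = rng.uniform(0, 100)
    cumulative = 0.0
    for name, pct in rules:
        cumulative += pct
        if roll <= cumulative:
            return name
    return rules[-1][0]  # guard against floating-point rounding
```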

Completing Data-Driven Visualization in CityEngine

November 19, 2013
Planning grid overlaid with tax lots marked for potential redevelopment

In an early September post I described a project for Portland Metro and the University of Oregon to implement data-driven visualization using ESRI’s CityEngine. I completed the work at the end of October and wanted to share some terrific results.

The goal was to create a 3D model of an urban landscape similar to the simplified physical site models architects use, and to compare alternative development strategies using a real place and real development data. Starting with projected development patterns for the City of Portland’s Gateway District, we set out to show how the assigned development types would affect growth over the projected period, and to visualize alternative development choices. The progression of images here shows the action of the algorithms implemented.

Initial conditions mapped onto grid.

First, we imported tax lot and building footprint information from Portland’s traditional GIS map sources. On top of that we overlaid a 264 × 264-foot grid used for development planning purposes. Each grid cell was annotated with one of sixteen proposed development types (Residential, Office, Light Industrial, etc.), differentiated by color. To show initial conditions, I then created iconic building types on the existing building footprints. Black buildings represent structures on undervalued properties identified for redevelopment. White structures represent buildings likely to remain intact. I based approximate building heights on Lidar data.

Using Python inside CityEngine, I mapped the lots to their containing development grid cell and development type. An Excel spreadsheet mapped each development type to a mixture of six building types. Sorting the tax lots by their valuation, I split them into three sets and “redeveloped” each set by deleting the existing buildings, subdividing the largest lots, and assigning an appropriate building type based on job and housing goals for the particular development. Using both color and form, I ultimately created iconic models representing each of the building types on a lot.
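The sort-and-split step can be sketched in plain Python. This is a simplified illustration, not the actual CityEngine routine; the `valuation` key is a hypothetical stand-in for the real lot attribute:

```python
def split_for_redevelopment(lots):
    """Sort lots by assessed valuation (ascending, so the most
    undervalued lots are redeveloped first) and split them into
    three sets of roughly equal size.

    `lots` is a list of dicts with a 'valuation' key (hypothetical schema).
    """
    ordered = sorted(lots, key=lambda lot: lot["valuation"])
    third = len(ordered) // 3
    return ordered[:third], ordered[third:2 * third], ordered[2 * third:]
```

Each returned set can then be handed to the redevelopment routine in turn, which makes it easy to render intermediate states such as the 33% and 66% redevelopment scenarios.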

Gateway District shown with 66% of its targeted lots redeveloped

With the generative algorithms in place, it is simple to enter different data into the spreadsheets and update the resulting growth patterns under alternative scenarios. The final image above, for example, shows how the Gateway District might look if 66% of the lots identified as “underutilized” by the City of Portland were redeveloped.