Those of you who know me know that I am very proud of the four and a half years I worked at the U.S. EPA’s Office of Water, first as an Oak Ridge Institute for Science and Education Fellow and then as an environmental protection specialist and economist. Fewer of you will know that every major effort I worked on as a federal employee more or less failed. This was not due to any failings on the part of the people there; the staff and management remain one of the smartest and most dedicated groups I have ever met. And even the undeniable budgetary and partisan political troubles cannot take the full share of the blame. No, our failures stemmed from how poorly institutions and organizations set up in the 1970s are suited to deal with 21st century environmental problems.
So what I took out of my time at U.S. EPA were the lessons of failure. And nothing taught more lessons than the high profile failure of the Stormwater Rule.
Taking dramatic action to deal with the damage to U.S. waterbodies caused by stormwater running off buildings and roads wasn’t a bad idea. Dramatic changes were necessary in how stormwater was managed and regulated in the United States to prevent further degradation of those waterbodies. The U.S. EPA had more or less left the issue to the states for many years, and while some had made significant and exciting progress, many were doing virtually nothing.
Our specific regulatory approach itself wasn’t bad or uninformed either; it had been developed by some very smart people with decades of experience in the issue area itself and in the nuts and bolts of how to get things done in Washington, D.C. In many ways we were implementing a tried and tested U.S. EPA approach to environmental regulation: let the states experiment, and then use the lessons from that experimentation to set a minimum technical standard that the states that were straggling behind would have to meet.
No, the problem was the institutional context in which we were operating, which, as I noted above, was still tuned to fix the environmental problems of the 1970s, where the problems could be easily traced to some big building with a pipe spewing gross stuff that killed the fish.
Before U.S. EPA management and the White House could come to a final decision about the proposed regulation, however, the agency was going to have to go through the full regulatory process, which means that even the most sensible sounding and pragmatic policy (such as we thought ours was) has a lot of hoops to jump through before it becomes law.
This is because decisions, particularly decisions by public authorities, often have unintended consequences, and attempts to solve a problem in one area may lead to frustration and misunderstanding, and even adversely affect public health, the environment, and people’s livelihoods in other areas. The solution is to create an opportunity for affected and interested parties to make their voices heard, and in the U.S. a federal requirement that authorities include the public in their decision making process has existed at least since the National Environmental Policy Act of 1969. And “public participation” requirements in some form or another are pretty commonplace in Europe as well, as I would imagine they should be in any nominally democratic society.
And we can imagine the ideal public participation process. New information is brought to the public authorities, they reconsider and redesign their policy appropriately, and the harms the policy would have caused are avoided or at least minimized. Perhaps the process takes a bit longer, and the final policy costs a bit more, but overall society is richer, more equitable, and just plain better off. But how often is this ideal realized?
I can’t say exactly. On the margins, when only minor tweaks to the policy are required, public input and participation can be very effective: exempting some small population from the regulation or giving them extra protection, say not requiring businesses below a certain size to fill out a lengthy report to the government, or putting some extra resources into inspecting potentially toxic substances in schools.
But when the changes to the anticipated policy become more than little tweaks, simple participation may not lead to any meaningful improvement of the policy, and may just lead to confusion and distrust. Some problems just aren’t that easy to characterize or solve, and the public authority may very well find its attempt to solve the problem characterized as being a problem in itself. With these “Wicked Problems,” even formulating what the problem is, is a problem; often problems are not understood until after a solution has been formulated.
Let’s say for example you have too much waste piling up for the city dump to handle. The problem may be that the dump is too small, that you need a recycling program, or simply that the citizens should not consume as much stuff (NRC 1996).
As stakeholder groups (waste haulers and environmental activists in the above example) are likely to favor different solutions, they likely won’t be able to agree on what the problem is in the first place.
So we moved forward with our formulation of the solution (which was tied to our formulation of the problem, i.e. the need for U.S. EPA to set a standard) for the damage caused by stormwater runoff from streets and buildings. Our solution was a regulatory requirement that newly developed and redeveloped land try to mimic natural hydrology. We believed that this could be implemented in a relatively cost effective manner, as we had seen a lot of very attractive projects which had used green infrastructure such as rain gardens and green roofs and new “low impact development” techniques to accomplish just this. Some had even saved money and seen higher sale prices because, well, people like plants.
We had listening sessions across the country where people commented on our proposed changes to administrative law. Some people liked our program, some didn’t. And many just brought a lot of new information to the table. For example, a major barrier to the use of green infrastructure was that it wasn’t allowed under local building codes. For something like this we could issue guidance and allow local authorities some time to revise these codes. Other issues weren’t so easily resolved. For example, we were only looking at regulating new development and redevelopment within urbanized areas; what if this perversely pushed more sprawl as developers sought to avoid the costs of regulation? And could the new techniques, which infiltrated rainwater and anything it was carrying into the ground, lead to groundwater contamination? And how burdensome would this be for the construction industry; would it substantially increase home prices?
These questions and many others were debated inside and outside of the U.S. EPA for the entire rulemaking process, and many of the debates go on even though the rule itself died in March 2014. And the process and the extensive information gathering, modeling, and analysis that were connected to it were not able to build any kind of broad consensus for or against the regulation we were proposing.
Which sounds funny. Having collected and analyzed all this information we should have “known” more and so it should have been clearer whether the regulation was a good or bad idea. But we hadn’t dealt with the fact that we had a wicked problem on our hands, or rather we were trying to deal with one aspect of a whole host of interconnected issues relating to urban development, and we couldn’t convince stakeholders that we could pursue our goals without stepping on their toes and causing problems in the issue areas they cared about.
Now, as I learned in our efforts to make sense of what happened, the problem of building consensus and understanding around government actions which involve difficult to interpret science and political disagreement was not unique to this effort. Back in 1996 the National Research Council (NRC) took a stab at addressing similar issues encountered by a number of U.S. entities, including the U.S. Departments of Defense, Health and Human Services, and Agriculture, the EPA, the U.S. Nuclear Regulatory Commission, the American Industrial Health Council, and the Electric Power Research Institute, in its report Understanding Risk: Informing Decisions in a Democratic Society.
The thinking in the report remains state of the art today and it contains many concrete examples of where the process to inform the public and develop the appropriate scientific analysis was done correctly, and where it was done poorly. If the recommendations of the report had been implemented broadly at U.S. EPA, we would have been far more likely to have avoided the expensive and very public failure that was the process to establish a Stormwater Rule.
And process is the key word here. The final rule could be challenged in court if we did not comply with certain federal requirements, which included when you have to publish notices in the Federal Register, how long comment periods need to be open, what additional statutes have to be considered, and what analyses have to be conducted. Where we fell down with the Stormwater Rule is that we formulated the broad outlines of the problem, and the solution, relatively early, and then treated the (admittedly substantial) process requirements that had been given to us as boxes to be checked, or hurdles to be jumped, on our way to a final decision. Don’t get me wrong, we did listen and conducted the elements of the process with complete sincerity, but how open we could be about changing our approach was limited by the fact that we knew basically where we wanted to go, and we didn’t have much time to get there, as we started work in earnest in 2009 and planned to propose by the end of 2012. That is a very, very aggressive schedule for a government action of the size we were conceiving.
In the above mentioned report (Understanding Risk) the NRC is emphatic that the process must be gotten right (NRC 1996, p. 22), and the process element which the NRC warns most strongly against using without broad based deliberation was exactly the element which killed our rulemaking: the cost benefit analysis (NRC 1996, p. 104) and the many “judgements that are implicit” in the techniques that make such an analysis possible.
Now the superficial reason that the cost benefit analysis killed the Stormwater Rule was that costs were higher than benefits, and no doubt having higher benefits than costs would have strengthened the political hand of U.S. EPA management and made them more likely to issue the rule. But getting high “monetized benefits” around water issues is notoriously difficult, and many regulations have been issued in which costs were much higher than benefits (I recall a rule of thumb for water rules being that costs should be no more than three times benefits).
The reason that federal agencies are allowed to issue regulations even when a cost benefit analysis is unfavorable is that it may still make sense to take the action. The meaning of costs and benefits in the context of a formal analysis is deceptively narrow and technical, and understanding precisely the final numbers of any given analysis requires not only a significant background in economics but some familiarity with how methodologies were implemented in that specific study. In the environmental context, these terms are even less intuitive because one must rely on “non-market valuation,” which depends on a very specific and even more technical body of research.
To cover quickly how “cost” and “benefit” are used in everyday speech as opposed to in a formal analytic setting, let’s consider that while many of us consider increasing equality to be a “benefit,” there is no way that equality can be considered in the framework of a cost benefit analysis. Other qualitative factors of interest, such as sustainability, resiliency, or innovativeness, might be translated into monetized terms in a cost benefit analysis, but it is likely that after the translation process such terms would bear only a tangential relation to our everyday usage of the terms, and thus the analysis would likely be misleading in broader debate.
The point is: cost benefit analysis is only useful if decision makers and stakeholders understand how to interpret the results. And when it comes to complicated environmental regulations this is rarely if ever the case. Air regulations often have much higher benefits than costs, but this is because small particulate matter is likely to lead to human deaths. Now it does seem that avoided deaths should translate into HUGE benefits, obviously human deaths are first and foremost what we wish to prevent, but discussing human death in a cost benefit analysis requires putting a monetary value on human life; in the case of the U.S. government, standard values are between $7 million and $9 million per human life (these come from very technical studies about how much money Italian miners demand for more dangerous work). Check your intuition on these numbers for a second. Then consider that simply multiplying one of these numbers by the number of expected deaths likely forms the core of the several billion dollars in potential benefits reported in the news regarding a big new air regulation.
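To make that multiplication concrete, here is a minimal sketch of the arithmetic. This is my own illustration, not EPA’s actual benefits model, and the avoided-deaths figure is a made-up placeholder:

```python
# Illustrative sketch of "monetized mortality benefits" arithmetic.
# The VSL range matches the $7M-$9M figures mentioned above; the
# avoided-deaths number is purely hypothetical.

VSL_LOW = 7_000_000   # value of a statistical life, low end ($)
VSL_HIGH = 9_000_000  # value of a statistical life, high end ($)

def monetized_mortality_benefits(avoided_deaths, vsl):
    """Benefits attributed to avoided premature deaths."""
    return avoided_deaths * vsl

# Suppose a hypothetical air rule is estimated to avoid 1,000 premature deaths:
low = monetized_mortality_benefits(1_000, VSL_LOW)
high = monetized_mortality_benefits(1_000, VSL_HIGH)
print(f"Benefits range: ${low / 1e9:.1f}B to ${high / 1e9:.1f}B")
```

A single assumed parameter, the VSL, thus drives billions of dollars of headline “benefits,” which is exactly why the final number is so hard to interpret without knowing the judgements embedded in it.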
If you have any skepticism about placing a value of $7 million to $9 million on each individual human life, you probably would have preferred to look simply at the number of people likely to die without the new regulation. Then you might be able to make a judgement call based on what you thought was a manageable cost for industry. The problem we are now faced with is that we as a society don’t have a nice analysis to make the implicit judgement for us. What you decide is “manageable” will say something about your judgement, and thus also about you, and your judgement is likely to be inextricably linked to whether your child suffers from asthma or you have to install the control technology on your coal plant, whether you’re a Greenpeace volunteer or a Koch Industries lobbyist. Very quickly what seemed to be an entirely technical matter has become a political one.
And here are the Scylla and Charybdis which any decision making process which is at the same time scientific and yet political must steer between.
Overly technical and analytical decision making processes threaten to confuse relevant stakeholders, the public, and even the decision makers themselves, and to obscure their understanding; overly political processes may never arrive at a consensus on the facts at hand. But in a well tuned process, consensus and understanding go hand in hand, just as technical analyses can support and inform political deliberation.
Now designing and carrying out such a process is more of an art than a science. But this is an art that can be informed by prior cases, as the “Understanding Risk” report shows. An example of a successful effort comes from the Man and Biosphere Program (MAB) which was organized by the U.S. Department of State. I assure you, given the scope of the issues involved, success was not assured:
In a MAB activity over several years, more than 100 natural and social scientists from various federal and state agencies and from universities have considered policy options for managing surface water so as to maintain a sustainable ecosystem in and around Florida’s Everglades (Harwell et al., in press). Changes in the ecosystem and possible responses to them entail risks to endangered species, to drinking water quality in nearby metropolitan areas, and to the livelihoods of sugar growers. The scientists considered all these risks carefully, but from a perspective different from that typical of risk assessments.
They defined the problem not as one of estimating and reducing risks, but as one of developing a shared vision of desired conditions of the ecosystem. They then identified development strategies consistent with such a vision and proposed governance structures that could adaptively manage the social-ecological system as it changed and as new knowledge developed. They considered several scenarios for change in human management of the ecosystem and analyzed them in terms of their compatibility with goals of sustainable economic and social development and with a widely shared vision of ecosystem use. The MAB effort is noteworthy for its problem driven approach, particularly its extensive and explicit efforts to understand the decisions to be made, rather than presuming that decision makers would gain the understanding they needed from estimates of the ecological, health, and economic costs and benefits of previously defined choices. In fact, the process generated policy options that had not previously been considered and might be more acceptable, both socially and ecologically, than any that might otherwise have been considered (NRC 1996, pp. 18–19). (Emphasis added)
In addition to specific cases such as the one mentioned above, there is a good deal of general practical knowledge out there, and agencies would do well to direct more of their attention to the “craft” of meaningful public participation. Such effort shouldn’t be seen as a distraction from scientific work to inform policy, but rather as a necessary step toward ensuring that the data gathered and analyzed answer the relevant questions. And what the relevant questions are is never clear at the outset of a policy making process which involves a complicated mix of politics and science.
And those of you who know me very well, or who read my post at the beginning of the year, have heard of the huge potential I see in participatory modeling, particularly participatory modeling exercises aided by computer simulation. Such exercises have been shown to help stakeholders from diverse backgrounds come to a common representation and understanding of the problem, and thus a common and trusted vision for a way forward. As I develop my work with an approach that uses these exercises, my experiences with the Stormwater Rule and stormwater issues more generally will certainly be foremost on my mind. And I invite any practitioners or researchers struggling with these issues to contact me through the comment form on this blog or at chris.moore [at] ecologic.eu
Stern, Paul C., and Harvey V. Fineberg, eds. Understanding Risk: Informing Decisions in a Democratic Society. National Research Council. Washington, DC: National Academy Press, 1996. http://www.nap.edu/openbook.php?isbn=030905396X
Representative democracy has never been sufficient to formulate informed and legitimate legislation and policies. Elected representatives and governing institutions have always required help from civil society:
Civil Society 1.0: The town hall meeting. Everyone gets together in a room and debates directly with the elected representative. A robust and independent media is also indispensable.
Civil Society 2.0: In response to the growing complexity of society and policy issues, a whole group of organizations grew up to inform policy. Think tanks, academic institutes, issue advocacy nonprofits, and industry associations achieved power and influence by filling gaps and finding niches: conducting research, proposing and opposing policy, representing core constituencies, and providing issue expertise.
Civil Society 3.0: Cross cutting persistent issues such as poverty and emerging issues such as climate change adaptation require means of building consensus and collective action that overwhelm Civil Society 2.0 actors. The experts and institutions built for 20th century policy problems lack the flexibility, creativity, and legitimacy to address 21st century challenges unaided. Blogging and social media allow new thought leaders and influencers to emerge, and online forums facilitate dialogue and organization. Increasingly, ideas will be crowdsourced (e.g. MIT Climate CoLab) and cross cutting issues will be addressed, not by building wholly new organizations and expertise, but rather by reorganizing existing actors, often those active in Civil Society 2.0. Increasingly important are “systems leadership” and “collective impact,” facilitated by organizations (e.g. FSG and The Intersector Project) and embedded in “backbone organizations” (such as the Greater Cincinnati Foundation).
Design thinking has enjoyed incredible success in the for-profit sector and is increasingly extended to products and services aimed at positive social impact. To tackle systemic and persistent “wicked” problems, however, design thinking will have to be enhanced with techniques from game design.
Where there are start-ups, there is design thinking, and the reason why is relatively clear. When developing a radically new product or service, there are a lot of risks and a lot of unknowns, which mean even brilliant start-up ideas won’t make sense as an ongoing business concern. At least not as originally conceived.
Design thinking mitigates such risks, or at least increases the chances of revealing them before you’ve dropped millions of dollars in development. Empathizing with the user, maintaining a laser-like focus on his or her experience, and developing rough and ready prototypes for feedback often means that either an idea fails quickly, in which case you move on to the next one, or the final product or service offers exactly the features and experience that the user wants.
The design thinking approach can be contrasted with waterfall development, a model rooted in the age of mass manufacturing.
The waterfall model was originally adopted for software development, and the reasoning was also fairly clear: it follows far more closely our ideal of a rational course of action than design thinking does. Figure out what the end user wants up front and then figure out the most efficient way to get it to them. Easy peasy, lemon squeezy.
The hard part is when the user doesn’t know what he or she wants up front. Or thinks that she knows, but changes her mind once she starts using it. Doesn’t like this feature, would really like to have this one. Or even worse, maybe there is just something “off.” Here design becomes less of a rational or linear process and more one of empathy, intuition, and experimentation, and this is where design thinking excels.
If we check out design thinking’s Wikipedia entry, we see it is associated with addressing wicked problems, “a class of social system problems which are ill-formulated, where the information is confusing, where there are many clients and decision makers, and where the ramifications in the whole system are thoroughly confusing” (Rittel and Webber 1972). Wicked problems include persistent poverty, which has proven to be more than a match for efficient manufacturing processes, and many of our current environmental problems, which the successes of mass production played and continue to play their role in creating.
But while design thinking has shaken up the hotel industry through AirBnB and IDEO, and the leader in the practice is now moving to disrupt the remarkably resilient consulting industry, we have yet to see any exciting new businesses make progress on disrupting the problems we really care about, our persistent and evolving social and environmental problems. And while social impact accelerators and incubators exist and are producing all sorts of exciting companies and ideas, we have not yet seen social start-ups scale to make inroads on the pressing problems of our day, and even the most successful are currently not within any reasonable distance of disrupting, or acting as a reasonable complement or alternative to, our current, highly inefficient and unsatisfactory political and policy making processes.
The reason is that design thinking was developed primarily for consumer products: it gives the consumer what he wants. But there is no single product or service that can ensure that water is used in a socially equitable and sustainable way, or eliminate poverty, i.e. the problems that Rittel found to be truly wicked. Fortunately an approach was developed explicitly for wicked problems, and it has been used with success by a community of practitioners working in renewable resource management since 2000.
The approach, ComMod, has an orientation that will not be completely foreign to practitioners of design thinking. Empathy is inherent, as is prototyping. The difference is the type of client it is aimed at, and the type of design problems it aims to take on. It is geared towards solving problems in social and ecological systems by assisting with the design of institutions, and thus by necessity it is equipped to deal with a higher degree of complexity and uncertainty than design thinking, as well as to cope with very diverse values among the clients, i.e. the users and managers of the resource system. The client is not an individual consumer; the client is society, or rather the segment of society that is concerned about or involved with a particular wicked problem.
To deal with the complexity endemic to issues such as protecting biodiversity and promoting best practices of land and water management, ComMod puts researchers at the center of the process. Clear decisions on such issues are of little use if they do not have an adequate scientific and technical basis. But as these are wicked problems, and thus are not clearly formulated, it is not clear from the outset where the researchers should devote their time. Different stakeholders will have different understandings of the underlying problem, as well as of what the solutions are likely to be, and thus different takes on which factual matters need further research and which are irrelevant. If the researcher merely begins research without consulting the stakeholders, her work is likely to reflect her disciplinary background and research interests, which may be academically interesting and useful but not necessarily relevant to the decisions at hand. But if she consults the stakeholders, she runs into a problem similar to the one our design thinkers deal with: stakeholders will be unable to give a common or coherent view of what the goals and requirements of the research should be.
And this is where the game design and prototyping come in. Starting with one stakeholder group the researchers are able to get one perspective on the underlying problem, and possible solutions, and then are able to represent that understanding of the system of interest through a role playing game. This artifact, the game, can then be played by other stakeholders in the system and critiqued and iteratively improved. Stakeholders with diverse backgrounds are now working from a common object, understanding the assumptions and important facts of others, and seeing their own views incorporated and critiqued.
They also engage with and “play” with the problem and each other, which can allow them to temporarily put aside their own frame of reference, and thus better understand those of others and become more open to creative solutions. This avoids the situation often endemic to wicked problems, where stakeholders continuously argue past each other. It also looks a lot like the “second generation” model of planning advocated by Rittel and Webber, one that is an “argumentative process in the course of which an image of the problem and of the solution emerges gradually among the participants, as a product of incessant judgment, subjected to critical argument” (Rittel and Webber, 1973).
Prototyping isn’t everything. The final research product to support decision making comes from computer simulations based on the final prototype. But this research product is far more likely to be relevant to the important decisions at hand, as well as understood and accepted by the involved parties, because they played a role in creating it.
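To give a feel for what such a simulation can look like, here is a toy sketch of my own (not an actual ComMod model, and all parameters are invented): the players of a role-playing game about a shared resource become agents, each with a harvesting strategy, drawing on a common stock that regrows each round.

```python
import random

def simulate(strategies, stock=100.0, capacity=100.0, growth=0.15,
             rounds=20, seed=42):
    """Run a toy common-pool harvest game.

    Each entry in `strategies` is the fraction of the current stock an
    agent tries to take per round (with some noise); the stock regrows
    logistically. Returns the final stock and each agent's total harvest.
    """
    rng = random.Random(seed)
    harvested = [0.0] * len(strategies)
    for _ in range(rounds):
        for i, take_fraction in enumerate(strategies):
            take = min(stock, take_fraction * stock * rng.uniform(0.8, 1.2))
            stock -= take
            harvested[i] += take
        stock += growth * stock * (1 - stock / capacity)  # logistic regrowth
    return stock, harvested

# Compare a restrained and a greedy community of three resource users:
restrained_stock, _ = simulate([0.03, 0.03, 0.03])
greedy_stock, _ = simulate([0.15, 0.15, 0.15])
print(f"Restrained harvesting leaves stock {restrained_stock:.1f}; "
      f"greedy harvesting leaves {greedy_stock:.1f}")
```

Even a sketch this crude lets stakeholders argue about something concrete (is the regrowth rate right? is anyone really that greedy?), and those arguments are exactly what refines the shared model.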
ComMod has been applied successfully in Africa, Asia, Europe, South America, and Oceania; it has been used for issues surrounding agriculture, biodiversity, water scarcity and flooding, livestock, fisheries, and forest management. That it is not more widely used and known is likely a function of language (the approach was developed by French researchers, ComMod standing for Companion Modelling, itself a somewhat awkward translation of La Modélisation Comme Outil D’Accompagnement) and of the fact that the practicing community has remained largely in academia. Both of these are changing, however, as new geographic settings bring new linguistic settings and as private practice increases. Lisode is a private firm that serves French language clients, and Sim4Act, which will bring ComMod to English language clients (full disclosure: I am a co-founder of Sim4Act), is currently in its start-up phase.
These are welcome developments, as the increased use of ComMod is important for communities facing traditional wicked and commons problems such as resource scarcity and ecosystem management, or trying to grapple with new ones such as climate change adaptation. ComMod promises to be an important tool in the hands of policymakers and citizens trying to deal with politically contentious risk management problems where science is necessary but not sufficient to drive decision making. It fits neatly with the recommendations the National Research Council (NRC) made to a number of U.S. entities, including the U.S. Departments of Defense, Health and Human Services, and Agriculture, the EPA, the U.S. Nuclear Regulatory Commission, the American Industrial Health Council, and the Electric Power Research Institute, in its report Understanding Risk: Informing Decisions in a Democratic Society. ComMod fits the bill of the “analytic-deliberative” process that the NRC recommends, and offers an excellent alternative to cost benefit analysis, an approach the NRC openly warns against as being bureaucratically convenient but often poor at informing the public.
In an era of persistent and multiplying wicked problems, it’s time to extend design thinking’s successes, based on empathy and prototyping, to our urgent social and environmental problems through game design thinking. It’s time to scale up from individual users to systems.
Rittel, Horst W. J. “On the Planning Crisis: Systems Analysis of the First and Second Generations.” Bedrifts Økonomen (Norway), no. 8, October 1972. Reprint 107. Berkeley: University of California at Berkeley, Institute of Urban and Regional Development. As cited in Buchanan 1992.
Rittel, Horst W. J., and Melvin M. Webber. “Dilemmas in a General Theory of Planning.” Policy Sciences 4, no. 2 (1973): 155–169.
Those with the power make the rules, and with a few notable exceptions, the rules usually benefit those with the power. This is a general truth in human affairs.
For those of us who live in democratic societies, this often means living with the contradiction of nominal equality before the law and in political power (“one man, one vote”) versus reality: the few who wield economic power have a disproportionate voice in the political process through their lobbyists, and extra protection before the law with the help of their high-end lawyers.
Requiring that everyone have a seat at the table, and thus a voice in the process, is a common way to assure that actions in the public arena do not neglect society’s less fortunate. But even if the less fortunate are able to make it to the table, just a seat at the table doesn’t make you an equal, and certainly doesn’t guarantee that your interests will be represented in any final decision. Power dynamics have a way of playing themselves out around a table, too.
I’ve done a lot of verbal explaining of agent based modeling and its power and potential over the last several months. And after a very interesting and productive meeting with Dr. Tanja Srebotnjak, formerly of Ecologic Institute and now the inaugural Hixon Professor for Sustainable Environmental Design at Harvey Mudd, I decided to put together a summary of my thinking on agent based modeling combined with a list of relevant sources. I’ve discovered a lot of good work, and my own thinking has substantially evolved since my first post on the subject in July of last year.
To start at the beginning, I’d like to briefly address modeling in science more generally, because the word is associated with very complex exercises, run by very smart and technical people, that spit out results all but incomprehensible to most of us without abundant interpretation. Climate models (which often have agent-based components) are one example, as are macroeconomic and trade models, and simulations of biological and ecological systems.
We should not, however, get too intimidated. At the end of the day, a model airplane, be it made out of paper or plastic, is just as much a model as any of these. It may be used as a toy, but if it is useful for answering scientific questions then it’s a scientific model. None of us has trouble understanding the basic thinking behind building a physical model and placing it in a wind tunnel for various tests. The model is not the real thing: it may be smaller in scale, built out of different materials, or lack certain internal components, but as long as it captures the features of interest, it is perfectly adequate. In the case of an airplane or car in a wind tunnel, it is adequate for scientific tests that inform an engineering process.
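To make that model-airplane spirit concrete, here is a deliberately tiny agent-based model in the Schelling tradition. Everything in it is my own illustrative simplification, not something from the sources discussed here: agents of two types sit on a one-dimensional ring, and any agent whose neighbors both differ from it swaps places with a randomly chosen agent. Like the paper airplane, it strips away everything except the feature of interest, which is how simple individual rules play out at the level of the whole system.

```python
# Minimal illustrative agent-based model: two agent types ("A"/"B") on a
# 1-D ring. An agent is "unhappy" if fewer than `threshold` of its two
# neighbors share its type, and an unhappy agent swaps positions with a
# randomly chosen agent. All names and parameters here are hypothetical.
import random

def step(agents, threshold=0.5, rng=random):
    """One sweep over the ring: each unhappy agent swaps with a random agent."""
    n = len(agents)
    for i in range(n):
        left = agents[(i - 1) % n]
        right = agents[(i + 1) % n]
        similar = (left == agents[i]) + (right == agents[i])
        # With threshold=0.5, an agent is unhappy only when BOTH neighbors differ.
        if similar / 2 < threshold:
            j = rng.randrange(n)
            agents[i], agents[j] = agents[j], agents[i]
    return agents

def clustering(agents):
    """Fraction of adjacent pairs of the same type (1.0 = fully sorted)."""
    n = len(agents)
    return sum(agents[i] == agents[(i + 1) % n] for i in range(n)) / n

rng = random.Random(7)
agents = [rng.choice("AB") for _ in range(60)]
initial = list(agents)
for _ in range(100):
    step(agents, rng=rng)
print(clustering(initial), clustering(agents))
```

The point is not the specific rule but the method: the whole "society" fits in a dozen lines, and you observe the macro pattern (the clustering measure) that emerges from micro behavior, just as the wind tunnel lets you observe lift without building the real plane.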
The real discovery is the one which enables me to stop doing philosophy when I want to. The one that gives philosophy peace, so that it is no longer tormented by questions which bring itself into question.
2013 was fun. I quit my job, road-tripped across America, sailed, skied, and surfed in California, spent my summer learning German and sipping Kölsch on the Rhine, and moved to Berlin. But 2014 was a far more important year for my personal, professional, and intellectual development. It was the year I finally got my head fully around issues of science, politics, and philosophy that I’ve been contending with since I first got into democratic politics 12 years ago.
In the past 12 years I’ve found myself torn between two worlds of practitioners, the political and the scientific, which have two very different ways of looking at and interacting with the world. Academic training in political science isn’t much help in the business of politics, something I learned quickly after arriving on Capitol Hill. You’ll find people far more generous and helpful at getting you started than you would expect from reading Machiavelli or rational choice theory, and yet the idealist will be quickly disappointed as well: no one has the time to reflect on how to apply Rawlsian principles of justice.
I’m looking into a move into the start-up scene and in doing so came across an “innovation network” that a former Ecologic Institute researcher works with called “What Would Harry Do?” The conceit of the network is a smart one: unleash your inner “Harry,” your inner five-year-old who “ate too much playdough” and has been producing “fun ideas and useful stuff” since. And they’ve got a cool collection of projects that have come out of this “human-centered” “design thinking.”
But some of us had a fair bit of angst and Weltschmerz from the beginning. I don’t remember much about being five years old, but I have no memory of any uninhibited, pure time before rationality and responsibility. So we may do our best to “get” the conceit and play with it, but we keep the five-year-old under supervision. We’re adults: we may appreciate and even envy the five-year-old’s spontaneity, maybe even access a “younger” side of ourselves by observing him or her, but we still know best.
I have two dreadful lines of thought with which I will bore poor captive souls. The first, and newest, is agent-based modeling, which I discovered a little over a year ago but which follows from work I began in grad school and continued through my time at the U.S. Environmental Protection Agency and at Ecologic Institute: the need for improved methods in social science.
I’ve written about agent-based modeling, given a couple of presentations on it at Ecologic Institute, and made it the core method of my Humboldt work. The second line of thought has been more elusive, however, even though I have been thinking about it far longer. As a specific project it is two and a half years old, but it dates back at the very least to my early days in D.C., and maybe even to my undergraduate years.
Science! So often misunderstood as cold and complicated, but actually such a human and creative enterprise. So argued Dr. Roald Hoffmann in his excellent keynote at the Alexander von Humboldt Foundation’s 2014 Annual Meeting. Indeed, he argued, the technical, jargon-laden style of the academic article, invented to exclude natural philosophers such as Goethe from the field, now serves only to confuse and obscure. And chemistry itself may now move to far more creative heights: instead of simply representing objects found in nature realistically, it may start creating wholly abstract new chemical structures, structures that may come into use in a field called combinatorial chemistry.