Modelica 2014

Xogeny at Modelica'2014

Xogeny was formed just before the last Modelica Conference. At Modelica'2014, we will be presenting much of what we've been working on since then.

Here is some of the stuff we'll be presenting...

Modelica Conference App

Modelica events always have lots of quality presentations. I'm sure this year's event will be no different. At Xogeny, we were so excited about the conference program that we built a web app to help attendees organize their time.

Free Book: Modelica by Example

Another big thing we'll be talking about is the upcoming book Modelica by Example written by Dr. Michael Tiller. This was initiated as a Kickstarter project with the explicit goal of making an HTML version of the book available for free.

We'll have a big announcement to make with regards to the book. We aren't ready to spill the beans just yet, but if you want to keep up on the latest news regarding the book, sign up for the book's mailing list.

FMQ

We'll be presenting our work with Oak Ridge National Laboratory building a web-based user interface for small modular nuclear reactor design. This work was a collaboration between ORNL, Modelon and Xogeny.

Impact

At Xogeny we love open source. In fact, since the last conference, we've started a few of our own open source projects. The first is the impact project. Impact is a package manager for Modelica that makes downloading libraries as easy as possible.

Recon

Another open source project initiated by Xogeny is the recon project. The recon project defines a new set of file formats for storing simulation results. These new formats provide greater flexibility than traditional formats and are optimized to minimize network and disk access when reading individual signals. See the recon documentation for more details.

Keeping in Touch

If you are interested in hearing about Xogeny related news, you can sign up for our mailing list. If you sign up, you can unsubscribe at any time.


Recent Presentations

I've given a number of presentations recently that are available at various places on the internet. I thought I should take a moment to catalog them in case people are interested in them.

Web-Based Engineering Analysis

Length: 20 minutes

This talk focuses on the possibilities that are enabled by FMI and uses Xogeny's FMQ platform to demonstrate the role that FMI can play in model deployment.

Modelica and Model Deployment

Length: 45 minutes

This talk was given at the "NAFEMS/INCOSE System Modeling and Simulation Working Group Meeting". It includes some introduction to the topics of Modelica and model deployment. It is a fairly high-level talk about how Modelica, as a technology, fits into the industrial system engineering process.

FMQ Platform

Length: 20 minutes

This talk was given at the 2013 Detroit FMI Tech Day. In this presentation, I talked about how the FMQ platform uses FMI to provide customers with a path to cloud-based simulation and web-based engineering applications.

The content of this talk is very similar to that of my Web-Based Engineering Analysis talk above.

You can find all the videos from the 2013 Detroit FMI Tech Day here.


Visualizing Variability

Background

I was recently invited to speak at a special session titled "Modeling and Simulation: What are the Fundamental Skills and Practices to Impart to our Students?". I gave a short talk titled "Beyond System Dynamics and State-Space" emphasizing things I thought were important to consider beyond the purely mathematical considerations necessary to prepare students for the usual controls courses.

After my talk, during a short panel discussion, someone from the audience said something to the effect of:

I can give my students a set of differential equations and they know how to turn those differential equations into a simulatable model. The problem is that they simulate the system and they think that the answer they get is the answer. They don't understand that there are all kinds of uncertainties to consider.

Of course, the main issue here is how to convey to students an understanding of what uncertainty is and how to make sure it is taken into account. At least, this is how I think the person making this point thought about it.

I had a slightly different take. I suppose this is no surprise since most of the audience and everyone on the panel (except me) were from academia, so their thinking naturally centered on how to structure the curriculum.

But I looked at this a bit differently. I've worked on modeling of systems where stochastic information was available to characterize the uncertainty of different parameters in the system. So I understand not only that uncertainty exists but also how to express it. But even then, it's still hard to understand the true implications of that uncertainty. Knowing that one parameter's uncertainty can be represented as some kind of uniform distribution and another's as a normal distribution doesn't actually give us a sense of the impact on the solution.

I responded by pointing out that distributed CPU power has become a commodity and that something like uncertainty provides a high degree of "coarse grained" parallelism. In other words, you could use "the cloud" to help with this problem. I've mentioned my FMQ Platform previously, and it shapes a lot of how I look at these kinds of problems. It turns out that after my talk at DSCC, I was scheduled to speak at the recent Detroit FMI Tech Day event. I wanted to take this question from DSCC and put together an application that leveraged FMI and FMQ to show how we can approach visualizing uncertainty.

I chose the Lotka-Volterra model as my example, mainly because the dynamics are interesting (a limit cycle, non-linear behavior) and easy to understand. These days, almost everything I do is centered around the web. So, naturally, I created a web application as the first step. I created a dialog for editing the parameters of the Lotka-Volterra model and connected it to the FMQ Platform to support simulation.
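For readers who want to experiment outside the web application, the dynamics themselves are easy to reproduce. Here is a minimal sketch in Python (not the actual FMQ implementation); the parameter names and baseline values are illustrative assumptions, not necessarily the ones used in the application:

    # Minimal sketch of the Lotka-Volterra dynamics.  Parameter names
    # and baseline values are illustrative assumptions.
    import numpy as np
    from scipy.integrate import solve_ivp

    def lotka_volterra(t, state, alpha, beta, delta, gamma):
        x, y = state                      # prey and predator populations
        dxdt = alpha * x - beta * x * y   # prey growth minus predation
        dydt = delta * x * y - gamma * y  # predation gain minus mortality
        return [dxdt, dydt]

    def simulate(params, x0=10.0, y0=5.0, t_end=100.0):
        """Integrate the equations and return (t, x, y)."""
        sol = solve_ivp(lotka_volterra, (0.0, t_end), [x0, y0],
                        args=(params["alpha"], params["beta"],
                              params["delta"], params["gamma"]),
                        dense_output=True)
        t = np.linspace(0.0, t_end, 1000)
        x, y = sol.sol(t)
        return t, x, y

    baseline = {"alpha": 0.1, "beta": 0.02, "delta": 0.01, "gamma": 0.2}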

If you just simulate the Lotka-Volterra equations without any uncertainty, you'd fill the parameter dialog in as follows:

If you press the "Analyze" button, you'll get a plot like this:

This was exactly the point raised in the panel discussion. After running such a simulation, a student might look at this plot and think "OK, that's the answer". But, of course, it isn't. After all, where did all those numbers in the parameter dialog come from? And, more to the point, how accurate are they really?

So, let's modify the parameter dialog to include some uncertainties (specifically in the parameter affecting predation):

This will create a Monte-Carlo analysis where the baseline parameters are used and 50 additional simulations are done by generating parameter sets based on the uncertainty. In all, we'll get 51 different simulation results out of this analysis.
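To make this concrete, here is a hedged sketch of how such parameter sets might be generated. Which parameter is uncertain and what distribution it follows are assumptions chosen for illustration; in the actual application, the dialog drives these choices:

    # Generate 50 perturbed parameter sets around the baseline (repeated
    # here from the sketch above).  The normal distribution on the
    # predation coefficient is an assumption for illustration.
    import numpy as np

    rng = np.random.default_rng(42)
    baseline = {"alpha": 0.1, "beta": 0.02, "delta": 0.01, "gamma": 0.2}

    def sample_parameters():
        p = dict(baseline)
        p["beta"] = rng.normal(loc=baseline["beta"], scale=0.005)
        return p

    # 51 cases in all: the baseline plus 50 perturbed parameter sets
    cases = [baseline] + [sample_parameters() for _ in range(50)]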

The important point to understand here is that if you are using a distributed computing framework to run your analysis, the (wall clock) time it takes to run 51 simulations is the same as the time it takes to run one (assuming you have the computing capacity to support 51 parallel jobs which, frankly, isn't very many). And since cloud computing providers charge by CPU usage, not concurrent usage, running 51 jobs in parallel costs the same as running them sequentially.
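Continuing the sketches above, the dispatch might look like this, with Python's standard library standing in for the actual FMQ job distribution:

    # Run simulate() from the earlier sketch over every parameter set in
    # parallel, one worker process per available core.  With enough
    # workers, 51 jobs take roughly the wall-clock time of one.
    from concurrent.futures import ProcessPoolExecutor

    def run_all(cases):
        with ProcessPoolExecutor() as pool:
            return list(pool.map(simulate, cases))

    if __name__ == "__main__":
        results = run_all(cases)  # 51 (t, x, y) trajectories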

Now if you press the "Analyze" button, you'll get a plot like this:

An important thing to note about this plot is the way coloring and opacity are used. If I had simply plotted each trajectory as a line, we'd have a nasty mess here. But instead, I plotted the baseline parameter set as a dark line of a given color and all the other trajectories involving uncertainty as semi-transparent areas between the uncertain result and the baseline result. If you run enough of these, you get a very interesting visualization where the shading gives a sense of the likelihood of passing through that point in state-space. Note that, in addition to this "likelihood" dimension, this type of visualization also conveys the "envelope" of the potential solutions (in much the way error bars would).
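Whether the original plots were produced exactly this way is an assumption on my part, but the layering effect described above can be reproduced in matplotlib by filling, at very low opacity, the polygon bounded by each perturbed trajectory and the (reversed) baseline. This assumes results holds the (t, x, y) trajectories from the sketches above, baseline first:

    # Sketch of the layering trick: each perturbed trajectory becomes a
    # low-opacity polygon against the baseline, so shading accumulates
    # where many trajectories pass through state-space.
    import numpy as np
    import matplotlib.pyplot as plt

    t, xb, yb = results[0]               # baseline trajectory
    for _, x, y in results[1:]:
        plt.fill(np.concatenate([x, xb[::-1]]),
                 np.concatenate([y, yb[::-1]]),
                 color="steelblue", alpha=0.05, linewidth=0)
    plt.plot(xb, yb, color="navy", linewidth=2, label="baseline")
    plt.xlabel("prey population")
    plt.ylabel("predator population")
    plt.legend()
    plt.show()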

Of course, this opens up immediate questions. For example, what is the impact of initial conditions on these trajectories? So let's introduce a standard deviation of "1" on the initial values. In that case, the visualization looks like this:

In this way, we can examine the effects of individual uncertainties or combine them together to see the net effect of several uncertainties acting in concert.

Conclusion

The point of this exercise is to show that, with distributed computing resources, analyses like Monte-Carlo become more attractive because they can be completed in nearly the same amount of time as a "normal" analysis and yet, with the right visualizations, they can convey a great deal of useful information.

So this started out as a question about explaining the concept of uncertainty to students. While there are important pedagogical aspects at the heart of this issue, it is also important to consider that technology itself can help greatly in this regard. But we have to understand what the technologies are capable of and how to use them effectively.


FMI Technology Day - November 6th

Who, What, Where?

Modelon has organized an FMI Technology Day to be held on November 6th from 10:00am to 4:00pm EST in Royal Oak, MI. In fact, they've put together a nice video describing FMI and why you might want to attend the event:

If you are interested in attending the event, please register now since attendance is limited.

I'll be there talking about model deployment. Specifically, I'll be showing off some work I've been doing helping customers build web-based engineering analysis tools for their Modelica models. FMI is a key enabler in this work and I'll be describing not just how FMI fits into the process, but how information captured through FMI helps us automate the process of creating these web-based user interfaces.

Other Speaking Engagements

In addition to the FMI Technology Day event, I'll be in the Bay Area speaking in a couple of different places:


What Engineers Need to Know About Version Control

Why Do I Need Version Control?

Reason #1: Danger

There are a lot of compelling reasons to use version control systems. Probably the most important reason is to mitigate the danger of losing your work. I frequently come across engineers who are completely unfamiliar with version control systems. Every time this happens, this is what I see:

OK, so these engineers aren't actually risking their lives when they don't use version control. But they are definitely working without a safety net. The question is...why?

I am always baffled by this situation. Why would anybody choose to take the unnecessary risk of losing any of their work to a disk crash, careless changes, accidental file deletion or other mishap when these issues are so easy to avoid?

As I'll cover shortly, modern version control systems are ubiquitous, easy to use and free. What possible barrier could there be to using one?

Reason #2: History

The risk of losing your work is probably the strongest argument I can make to an engineer about the value of version control. But another argument for version control is the fact that it gives you a way to manage all your files while maintaining a complete history.

I cannot tell you how many times I've seen a directory full of files with names like:

  • CalcProperties.f77.bak
  • ThesisModel.py.working
  • Circuit_jan23.mo
  • model.c.fred

This is, frankly, insanity. Most people who manage files this way would probably argue that their system "works for them", that they don't want to learn some "complicated system" or that version control is "overkill" for them. I, however, would argue that a version control system is easier to use than such ad hoc approaches and provides far, far greater capabilities and features. The key point here is that when trying to get people to adopt any technology, there has to be a good return on investment. They need to get out of it more than they put into it. I would argue that for any modern version control system, you will get this return on investment even if you don't use all the fancy features.

I've heard engineers argue that they don't need a version control system because, when they need to, they can create a zip/tar file and stash it away somewhere. They'll even argue that's pretty much what a version control system does. If you think this is true, you should read the excellent book Pro Git (free online version) so you can appreciate how version control systems really work.

Reason #3: Collaboration

The final reason for using version control (that I'll talk about, at least) is collaboration. I think of this reason as "icing on the cake" because a) it isn't really a strong enough argument by itself and b) for reasons I'll talk about later, it doesn't always apply to engineers.

I clearly remember once talking with a colleague of mine who worked in a modeling group. I was trying to explain to him the advantages of the configuration management features in Modelica to promote collaboration. He kept insisting he didn't need that. So I asked him "How do you collaborate with the other people in your group?". He then explained that they had a shared drive and they all just worked in that one directory. I asked him "But how do you manage the different configurations of the models?". He explained that they just saved each configuration with a different file name. To give you a sense of how sophisticated their taxonomical approach was, one of the files was called "TheBigKahuna".

If I found myself in such an environment, I would start pinching myself in the hope that I'd suddenly wake up, drenched in sweat with the comfort of knowing it was all just a nightmare.

Modern version control systems provide an outstanding platform for collaboration. They allow each developer to work independently when needed. But they also allow people to collaborate by seamlessly and effortlessly pulling and pushing changes to each other.

But the possibilities for collaboration transcend the version control systems themselves. Various tools and applications layer features like documentation, issue tracking and other collaboration facilities on top of version control systems to create extremely rich and user friendly collaboration platforms.

How to Get Started

There are a variety of version control tools out there. Most engineers don't want a survey of tools, they just want to know what they should use and how to get started.

I would argue that the most user friendly version control system for engineers would probably be Subversion (SVN). For those engineers working on a Windows desktop, I would strongly recommend the TortoiseSVN software.

I can already hear the cries of those "in the know" screaming "Don't tell them that, Subversion is ancient history, Git is much better!"

You will note I said that Subversion was more user friendly and I stand by that statement. But it is true that SVN is increasingly falling out of fashion and, simultaneously, user interfaces for Git are improving. YMMV.

It isn't simply SVN itself that is losing mindshare, but the entire approach that SVN takes. You see, Subversion is an example of a centralized version control system. Each project has a single, central server that acts as the "database" for the entire history of branches, tags, commits, etc. This centralized approach is increasingly being phased out in favor of "Distributed Version Control Systems" (DVCSs). Among these DVCSs, the most widely used system is Git. And, as is the case for most version control systems, Windows users can use a nice graphical front end in the Testudinidae family called TortoiseGit. It is also worth mentioning another popular DVCS called Mercurial, which is similar to Git in exactly the same way that Lilliput and Blefuscu are similar.

Personally, I have used Subversion, Mercurial and Git extensively but I currently use Git exclusively. Although the differences between Git and Mercurial are minor, it seems clear that Git is and will remain the dominant player in the DVCS space. One of Git's big advantages is that it is supported by collaborative platforms like GitHub, BitBucket and Trac. These tools can greatly enhance the capabilities of the underlying version control system.

GitHub is the darling of the open source world and it seems the vast majority of projects are hosted there (including several of Xogeny's). GitHub also features "private repositories" for non-open source projects. But if, for whatever reason, you are not able to use GitHub, I'd strongly recommend Trac since it can be self-hosted (e.g. behind a firewall).

The Good, The Bad and The Ugly

The Good

The tools I mentioned above are free. They are also well-engineered tools that have been hardened by use. They are all extremely reliable. These tools are also well documented, with lots of books and online resources that you can reference.

The Bad

Actually, there is no bad. There is only good and ugly...

The Ugly

I mentioned that SVN is the most user friendly. This was not so much a comment on SVN as it was on Git. Git's command line interface is, to be generous, "confusing". Fortunately, you don't need to worry about this too much if you use a reasonable UI like TortoiseGit and you eventually get used to it (for better or worse). Ample documentation on Git helps here too.

But there is some additional ugliness to be aware of lurking out there. This is because, as alluded to earlier, you may have trouble using version control tools for engineering because your engineering tools make this unnecessarily difficult. I wrote about this in one of my earliest blog posts, "A Disturbing Trend". Those complications really only affect the collaboration part. You can still use version control tools to keep a complete history without issue.

Frankly, as engineers we should be pushing back on tool vendors who insist on breaking from traditional line-oriented formats for engineering content to invent their own unnecessarily opaque formats that are incompatible with accepted practices in the software engineering world.

Conclusion

I recently gave a talk at "The Ohio State University" (they are very fussy about the "The") lamenting the fact that engineering was pretty much the entire impetus for the development of both computer hardware and software, yet today engineering is largely out of touch with modern concepts of computing. During this lecture, my host asked the audience (composed largely of people who build mathematical models) how many of them had heard of Modelica. He was trying to underscore this point about engineering being out of touch with the latest computing trends. As much as I like Modelica, and as much as I would agree that ignorance of Modelica is indicative of this trend in some small way, I interrupted to ask "You are asking the wrong question...how many of you use version control?" Predictably, hardly anybody raised their hand.

While predictable, this response was still disappointing. As an engineer, you have many reasons to use version control and no reason to avoid it. Engineers often look at version control as a technology exclusively for software developers. But engineers routinely develop "software", they just don't realize it.

I'm afraid this post doesn't live up to my goal of providing some kind of all-inclusive introduction for engineers to the world of version control. But I've at least provided some pointers and some advice. Frankly, there are so many resources out there that it really isn't worth repeating them here. I'll be happy to answer any questions or concerns in the comments though.