2016-10-14

PostDoc Position, Software Testing and Analysis / Software Engineering

At my Chair at Saarland University, we currently have a PostDoc position available, to be filled during the Spring of 2017.  We are looking for applicants in all areas of Software Engineering, with a special focus on experience in
  • Software Testing and Analysis
  • Security and Privacy
  • Specification and Specification Mining
  • Data Mining and Machine Learning
Your job will be to conduct bold and risky research, possibly interacting with the many great PhD students at the chair.  Our most recent projects include the work described in the posts below.
If such work inspires you, and if you see chances to combine your expertise and creativity with ours, we'll be happy to hear from you.  Please have a look at the official announcement and apply before December 1.

2016-09-07

On Facts and Dreams in Software Research


Where is the economic argument in program verification research, and where are the salvation messages in software engineering research?

I just had the pleasure of attending David Rosenblum’s keynote at ASE 2016, in which he praised the power of probabilistic thinking (using stochastic reasoning to estimate risks and support decisions) versus an absolutistic view in which there is only 100% truth or 100% failure.  His takeaway message was that software engineers should embrace probabilistic methods and let them permeate the entire development process.

As a software engineer, probabilistic thinking is a central part of my day-to-day job – in development as in research.  Whatever I build and design, I think about how useful it might be, which benefits it may bring, and what the associated risks are.  In Software Engineering research, the dominant metric is usefulness, which eventually translates into an economic argument: What does it cost? What are the benefits? What are the risks?  These are day-to-day questions in software development, and my research is expected to provide facts to help answer them.

In other fields of computer science, such as software verification, the economic perspective is much less discussed.  Instead, the message touted is an absolutistic message of salvation: If you apply this technique, you will get a 100% guarantee that your software satisfies certain requirements – that the computation will be correct, that it will not crash, or that it will terminate within specific time bounds.  For developers and society, this message feels like the second coming of Christ: The day will come and all your problems will be gone.

Up to now, there are great examples of how software verification works, and there are impressive examples of it being successfully used in practice; and just to be clear: by no means would I want to diminish these research efforts.  But the systems formal verification is applied to are still small and constrained; and getting it to scale and widen will require enormous human effort reshaping existing systems – before salvation comes repentance.  Do we actually know how costly formally verified software is?  Do we know how to teach its techniques to developers who do not hold a PhD?  Can we estimate the risks of relying on freshly written specifications rather than on mature systems?  Might "good enough" software be good enough?  When talking to researchers in software verification, such economic arguments are frequently missing from the debate.  But as a society, we need to understand where money is best spent, and answers to such questions could very well guide the field of software research.

Conversely, just as formal verification may benefit from introducing an economic perspective, Software Engineering may profit from adopting a stronger salvation perspective.  What is it that Software Engineering research could produce that would be seen as a salvation in software development – a significant reduction of costs or risks?  Recent topics that come to my mind are automatic software debugging and repair, steering development processes based on mining software histories, or massive automated test generation.  Can we translate our capabilities into grand challenges that will provide salvation in the future, possibly even including guarantees?  Yes, I know there are “no silver bullets” in software development.  But any great research community should pursue great dreams as well as provide facts – and in this, all of us computer science researchers are in the same boat.

2016-06-28

Spiegel Online uses an insecure Caesar cipher: How to read Spiegel Plus articles without paying

The Spiegel publishing house is making a new attempt at earning money with online journalism: with "Spiegel Plus", individual articles from the online and magazine offerings are sold for a fee.  I think this is a good thing, for several reasons:
  • Good journalism deserves to be paid for;
  • I would rather pay for articles than be swamped with advertising; and
  • I am a Spiegel subscriber anyway and can thus access the articles in any case.
The way Spiegel Online has implemented its paywall, however, still needs improvement: any fool with elementary programming and cryptography skills (= me) can conveniently bypass the block, using techniques found in any children's book on encryption:

1. Take a Mac and visit the desired page with the Safari browser – such as this one.  A payment request follows.  (Unfortunately, there is no way for subscribers like me to log in – which motivated the next steps.)


2. Switch the browser into Reader mode (Shift-Command-R).  Now the entire article appears – but the part to be paid for is encrypted.
3. Obviously, individual characters have been replaced by other characters – spaces and paragraph breaks are still recognizable.  Two paragraphs begin with "Jo" – obviously a frequent word.  Taking the preceding letter of each, I get "In".  Likewise, "Tfju" becomes "Seit", and "Ebt" becomes "Das".  The developers have encrypted the text with a Caesar cipher – each letter is replaced by its successor.  This is the simplest and most insecure encryption method there is – and an insult to every security expert.

4. Select the encrypted text and copy it to the clipboard (Command-C).  Then surf to a page that decodes the Caesar cipher – such as this one.  Paste the copied text into the lower field, enter "1" as the shift, and press "Decode" – and the decrypted article text appears at the top.  Umlauts and "z" are still broken, but this is left as an exercise for the inclined programmer.
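
If you would rather decode locally than paste text into a web page, here is a minimal sketch in Python (my illustration, not Spiegel's code): it shifts ASCII letters back by one and leaves everything else – including the broken umlauts – untouched.

    def caesar_decode(text, shift=1):
        """Decode a Caesar cipher: shift every ASCII letter back by `shift`."""
        decoded = []
        for ch in text:
            if "a" <= ch <= "z":
                decoded.append(chr((ord(ch) - ord("a") - shift) % 26 + ord("a")))
            elif "A" <= ch <= "Z":
                decoded.append(chr((ord(ch) - ord("A") - shift) % 26 + ord("A")))
            else:
                decoded.append(ch)  # spaces, punctuation, umlauts stay as they are
        return "".join(decoded)

    print(caesar_decode("Jo"))    # -> "In"
    print(caesar_decode("Tfju"))  # -> "Seit"
    print(caesar_decode("Ebt"))   # -> "Das"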


Thank you, dear Spiegel, for having placed yourself at the spearhead of German investigative journalism for decades.  Now you only need to give up your place as the taillight in matters of encryption, and you will have my full respect.  A little tip: our CISPA institute is happy to give advice on secure encryption and payment schemes :-)

Update from June 29, 2016: Matthias Streitz from the Spiegel editorial board has contacted me – he thanks me for the report and wants to check whether they "can come up with something smarter than the Caesar shift".  I am optimistic that they will succeed.

Update from June 20, 2016: The "low paywall" has also been observed by others.

2016-05-23

My first talk at a scientific conference: A complete and utter disaster

I am just returning from ICSE 2016 in Austin, Texas; and once more, I have been impressed by the great many research talks.  For many of the presenters, this might have been their very first talk, and I was happy to see that it went pretty well for everyone.  My very first presentation at a scientific conference did not go so well.  Actually, it was a complete disaster.  But it eventually made me a professor.

This happened in 1993, at the German Software Engineering "SE" Conference in Dortmund.  I was a PhD student in my second year, and my advisor, Gregor Snelting, had asked me to present a summary of our group's research.  At this time, we were working on semantics-based retrieval of software components: You would enter the desired pre- and postconditions of your component into a search field, and the system would automatically retrieve a suitable component.  The novelty was that we were using theorem provers to find possibly weaker preconditions or possibly stronger postconditions.  At the time, theorem provers were just about to enter the programming language space, so this was all new and unheard of.

So, here I was, standing on the stage, in front of about 100 German-speaking SE researchers, presenting the NORA system.  (NORA stood for No Real Acronym.)  And I would be showing how to search for a function given a simple postcondition – something like

    ∀ i: 0 ≤ i < |a'[]|−1 ⟹ a'[i] ≤ a'[i+1]      (1)
    ∧ perm(a'[], a[])                              (2)

This, as the specialist will immediately recognize, is the postcondition for a sorting function: The output a'[] is sorted (1), and is also a permutation of the input a[] (2).  I think I was right in the middle of explaining the formula when I saw that a member of the audience had stood up, looking right at me.  Then, with a defiant look on his face, he shouted:

"When I want to search for a sorting function, I do grep sort!"

"grep sort" is a UNIX command; it means simply searching through a list of textual descriptions to find a sorting function.  The audience was absolutely silent.  For a second.  Then, the whole room erupted with laughter.  I peeked at Gregor, my advisor, who was sitting in the front row.  He crossed his arms and then grinned at me: How would I get out of this?  I stammered something along the lines of: Well, that's of course true, but assume you have no knowledge of sorting – you don't even know that the word "sort" exists – then with our method, you would still be able to find something.  The shouter would just chuckle, shake his head, and then sit down.  The idea that a programmer might not know what sorting means did not strike him as realistic.

I was just about to recover from that blow and about to put up the slide with the theorem prover diagram, when the next guy stood up, a mocking smile on his lips.  "Yes?", I said.

"You know, to me this looks like a solution looking for a problem."

Again, the entire room erupted with laughter.  I don't remember how I replied, but it was just as unconvincing as before.  I was finished, publicly humiliated and ridiculed.  Gathering up what was left of my dignity, I quickly showed the remaining slides.  I think I got some applause at the end, and even a question.  Still, for the rest of the day, I was marked.  Every time a participant saw me, a smile would flash over her or his face for a fraction of a second, remembering my ridicule.

"Who were these folks?" I asked Gregor.  "Congrats", he said.  "These were Professors Manfred Nagl and Jochen Ludewig, two really big shots in German Software Engineering.  You managed to provoke them.  That's good."  – "Good?" I said.  "These folks just ridiculed me in front of the entire audience.  How would that be good?" – "Well, we'll be known!", he said.  "Yes, but for what?" I replied.  All the way home, I was furious at these two hecklers who had so rudely ruined my presentation.  And I vowed that one day, I would become a really big researcher, and I would have the last laugh over them.

Over the course of the next twenty years, I would be busy fulfilling my vow, and yes, it sort of worked – never again would a member of the audience shout stuff at me.  (But then, this may also be due to me never again presenting formal methods to a Software Engineering audience.)

The last laugh, though, never came to be.  Two years ago, when I last met Professor Nagl and Professor Ludewig, I asked them whether they still remembered how we met the first time, with me on the stage and them standing up in the middle of the talk; and I wanted to thank them for how their comments had fueled and ignited my ambition for decades.  Of course, they did not remember a thing.  For them, it was just a minor laugh among many.

2016-04-17

The new ICSE Erdős penalty, or why we should create incentives for frequent reviewers

Ever heard of Paul Erdős?  The 20th century Hungarian mathematician is not only known for his numerous contributions to Mathematics, but also for his multiple collaborations, engaging more than 500 collaborators.  Frequently, he would just show up on their doorstep, work with them for some hours, and then get a joint paper out of that.  A low Erdős number indicates academic closeness to Erdős, and is something one can brag about at academic venues.  Yet, if today, Paul Erdős knocked on your door, and asked whether you would like to work with him, you should avoid any collaboration with him – if you work in Software Engineering, that is.  Why is that?

The International Conference on Software Engineering, or ICSE for short, is the flagship conference of the field of Software Engineering.  If you want to publish and present your greatest work, this is where you submit it.  An ICSE submission is reviewed by three peer researchers, whose assessment eventually determines whether your work is accepted or not.  Even if your work gets rejected, you at least get detailed reviews and high quality feedback.

Over the years, ICSE has observed that there were authors who apparently were way more interested in the reviews than in getting their papers accepted; authors who would submit up to ten papers, of which none got in; but the authors would at least get thirty reviews, all for free.  This motivated ICSE to impose a new limit: Any single author may now appear on at most three submissions.  If you have four papers ready for submission, then you are supposed to select the best three.

The ICSE program chairs argue that few authors and even fewer acceptances would be affected by this decision.  But the problem with this decision is not the factual impact.  It is the potential impact.  What if a modern Paul Erdős knocked on your door and offered to work with you?  You'd have to say no, because he would already have too many co-authored submissions.  What if you could not submit to the past conference because you were the one organizing it, and still had work waiting to be published?  What if four of your students all have great results at the same time, results that should be shouted out to the world?  Well, too bad: you can only submit three of them, causing depression in the fourth student who is left out.  None of these is likely to happen, but the fact that it could happen is causing concerns and anxiety, and rightly so.  An open petition asking ICSE to revert its new rules has gained dozens of supporters overnight. (Disclaimer: me too.)

The Software Engineering community has members who have literally devoted their lives to Software Engineering research.  They have no spouses, no kids; they work day and night.  While the boys are out for a skiing weekend and the girls are out in their summer clothes, these folks are busy working on the paper that they hope will make them famous.  They serve on program committees, they write reviews, they organize conferences, they help others with their PhD theses.  They are amazingly productive, both on their own work and on the work of others.  And these are the men and women to whom the new ICSE rules send the message: Thank you, but no, thank you.

The problem of ICSE – and our community in general – is not so much an abundance of papers.  It is the lack of reviews.  It is our publications that determine our academic worth; much less so teaching; and even less so service.  Great papers get you tenure and a raise, whereas great reviewing might get you a committee dinner.  Thinking rationally, why should one spend time on reviews when one might just write papers that get reviewed by others?  Fortunately, the large majority of our community is still driven by the Categorical Imperative: We profit from the reviews of others, so we review their papers, too.  What we don't like are members who game the system by not only submitting lots of papers, but also not participating in the review process.

Therefore, what our community needs to do is twofold.  First, we need to think about reviewing processes that scale well and get high-quality reviews.  The ICSE program board model is a step in the right direction; a VLDB-like journal model might be even better.  Second, we should not penalize researchers for their own productivity; but instead create incentives for researchers who spend great effort on reviews and service.  Rule by the carrot, not by the stick.

Such incentives for service should not be monetary (these wouldn't motivate researchers anyway); nor should they result in a different reviewing or acceptance process (this would be perceived as unfair).  But how about raising the limit of submissions if you have a co-author who is also a frequent reviewer?  Or allowing reviewing volunteers to apply for a one-day extension to the conference deadline?  (You'd get plenty of applications on the last day :-)  Or providing "fast track" journal reviewing for those authors who sport a status of "distinguished reviewer"?  With such incentives, if a prolific reviewer like Paul Erdős knocks on your door, you would not boot him out, but embrace him instead.

2016-04-09

Four security flaws illustrated, all on one conference registration site

When you're organizing a big scientific conference – a convention where scientists from across the world convene to exchange their latest and greatest – you have to think about zillions of different things: rooms, projectors, food, coffee, budget, accommodations, badges, speakers, leaflets, dinners, just to name a few.  It is impossible not to make mistakes, but it is generally possible to fix them once you know about them.  The worst mistakes, though, are the ones you never thought could be made.

Some time ago, I registered for one of these scientific conferences.  The process is simple: You enter your details, select optional packages, finally enter your credit card data, and you're done.  This being a computer science conference, you would think your data is all secure in the hands of experts.  As I can now tell you from experience, this assumption is wrong.  Very wrong.  This single registration system contained not just one security flaw, but four – all independent of each other.

My Registration Screen

Security Flaw #1: The identifiable ID, or How I would be able to access the data of every conference participant

The fun began when I got my confirmation e-mail.  Apparently, I was the first person to have registered for the conference – because my participant number was one.  ("Hey – look at me; I am participant number one!" I said.)  In my e-mail, I also got a link with which I would be able to access my registration.  Following it would immediately lead me to the above registration screen.



The link was a bit unusual, though, because it did not contain any "secret" information or token other than my participant number.  (You might assume it would encode my name, my ZIP code, or some other information tied only to me.)  So I asked myself: What would happen if I changed the link from "?parm1=1" to, say, "?parm1=2" – that is, participant number two?  I entered the link into my browser, and immediately, I saw the registration screen of Lars Grunske, a colleague of mine in Stuttgart, Germany.

Lars Grunske's registration screen

Now having the same privileges as Lars, I would be able to read and change all his data at will.  The idea of having him buy a few extra dinner tickets at his expense arose, but only briefly.  Trying the same for further participants gave the same result (Hi, Abhik!  ¡Hola, Yasiel!).
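
In security terms, this is a classic "insecure direct object reference": the URL parameter is the only access control.  A hypothetical sketch in Python of how trivially such a scheme can be enumerated (URL and host are made up for illustration; the real site used a similar ?parm1=<id> scheme):

    import requests  # third-party HTTP library

    # Hypothetical registration URL, for illustration only.
    URL = "https://conference.example.com/registration"

    for participant_id in range(1, 11):
        response = requests.get(URL, params={"parm1": participant_id})
        # Each response would contain the full registration screen of
        # that participant - no secret, password, or token required.
        print(participant_id, response.status_code, len(response.text))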

In 2011, a similar mistake was made by UNESCO, which also used consecutive numbers for its internship applicants, and thus leaked hundreds of thousands of applicant records on the Web.  (German article on Spiegel.de)  What do you do when you discover such a problem?  To protect the integrity of participant data, I dutifully reported the problem to the organizers, who immediately replied that the issue would be fixed as soon as possible.

Lesson 1: When handling personal data, set it up such that access requires a secret that cannot be easily guessed.
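
What such a hard-to-guess secret could look like, as a minimal sketch in Python (the link format is made up for illustration):

    import secrets

    # 32 random bytes, URL-safe: infeasible to guess or enumerate.
    token = secrets.token_urlsafe(32)
    # The server would store the token with the registration record
    # and accept only exact matches.
    link = f"https://conference.example.com/registration?token={token}"
    print(link)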


Security Flaw #2: The Unsanitized input, or How I easily bypassed password checks

The next day, I got a new mail from the organizers.  In addition to lots of high-end security stuff (which would not protect against guessing a participant number), they had now introduced a secret word known only to the registrant – commonly known as a password.



Okay.  I went to the site, and it indeed now requested that I enter my ID and password.

Revised login interstitial screen
Problem solved? Not at all.  I sent the above mail to my postdocs Marcel and Juan Pablo "JP" Galeotti, with whom I had talked about the problem the day before.  Minutes later, Marcel Böhme sent me back an intriguing message:



Incredible.  Could one really attack the system this way?  Ten minutes later, JP chimed in with



Ha!  Indeed, this worked like a charm.  Eventually, I would simply enter "2' -- " as my ID, and any string as my password – and again, I would be Lars Grunske, and would be able to alter his data at will.  Likewise, anyone with the above trick could do the same to my data.

How does this work?  Internally, the conference registration system uses a database that is controlled by so-called SQL commands.  When I enter my ID, say, "1", and my password, say, "1234", the system selects my data from the database using a SQL command looking like this:

    SELECT * FROM REGISTRATIONS WHERE ID = '1' AND PASSWORD = '1234'

Note how the number I entered as ID ("1") becomes part of the command.  By entering "2' -- " as ID and "whatever" as password, we get the command

    SELECT * FROM REGISTRATIONS WHERE ID = '2' -- 'AND PASSWORD = 'whatever'

In a SQL command, anything starting with two dashes "--" is treated as a comment and ignored.  So the system simply fetches the data of the registrant whose ID is 2, ignoring the password.  This is known as a SQL injection attack; the standard way to avoid it is to use parameterized queries, which pass user input as data rather than as part of the command (merely filtering out characters with a special meaning in SQL, like "'" or "--", is a much weaker defense).
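
A minimal sketch of the difference in Python, using the built-in sqlite3 module and the table layout from the example above: the first query splices user input into the command itself and is thus injectable; the second passes it as data via placeholders.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE REGISTRATIONS (ID TEXT, PASSWORD TEXT, NAME TEXT)")
    conn.execute("INSERT INTO REGISTRATIONS VALUES ('1', '1234', 'Andreas'), ('2', 'secret', 'Lars')")

    user_id, password = "2' -- ", "whatever"

    # Vulnerable: user input becomes part of the SQL command itself.
    query = f"SELECT NAME FROM REGISTRATIONS WHERE ID = '{user_id}' AND PASSWORD = '{password}'"
    print(conn.execute(query).fetchall())  # [('Lars',)] - password check commented out

    # Safe: placeholders keep user input as data, never as SQL.
    print(conn.execute("SELECT NAME FROM REGISTRATIONS WHERE ID = ? AND PASSWORD = ?",
                       (user_id, password)).fetchall())  # [] - no match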

Refining my ID to, say, "2'; DROP TABLE REGISTRATIONS; -- ", I might even have been able to delete all registration data.  (I hope they do backups!)  How could one set up a SQL-based system and never have heard about SQL injection?  Now this was beginning to get embarrassing.

Lesson 2: When setting up a publicly accessible service, identify common attack vectors and protect against them – for Web sites: buffer overflows, SQL injection, cross-site scripting, etc.


Security Flaw #3:  Plaintext Passwords, or How I would now also steal personal passwords from all participants

But the embarrassment was not over yet.  Remember how the e-mail above asked users to set up their own passwords?  It turned out that the passwords were actually stored and displayed in the clear, as seen on Lars' revised registration screen:

Lars Grunske's registration screen, now with password

The password listed was Lars' password; knowing it would allow me to log in with his user ID and password.  I could easily have skimmed the passwords of all participants, and could have logged in long after the SQL vulnerability had been fixed.

But storing passwords in the clear is bad practice for many more reasons.  It gives the administrator access to all passwords, which creates opportunities for theft.  Plus, and this is probably the worst part: Many people tend to use the same password for different sites.  Had Lars indeed changed his password as requested, and for instance chosen the same password he uses for Amazon or eBay, I would have been able to log in to these sites on his behalf, and happily order stuff.

Had Lars used the same password he also uses for his mail, I could have accessed all of his passwords, everywhere – a simple click on "I have forgotten my password" links would have triggered "password reset" mails to his account, which I could easily have skimmed.  I'm a nice guy, so I did none of this.  (Plus, I have a cool joint research project with Lars.)

To cut a long story short: I sent another urgent report to the organizers, and hours later, the SQL vulnerability was closed.  The one we had found, that is.  I have no idea whether other vulnerabilities are hidden somewhere in there, or how the system had been tested for security, if at all.

Lesson 3: Passwords should never be stored, displayed, or transmitted in the clear.  Store salted hashes instead; and if a user requests a forgotten password, create a fresh one rather than revealing the old.
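
What "store hashes instead" means, as a minimal sketch in Python with the standard library (a production system would rather use a dedicated password-hashing library such as bcrypt or argon2, with tuned parameters):

    import hashlib, hmac, os

    def hash_password(password):
        """Return (salt, hash) - only these are stored, never the password itself."""
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest

    def verify_password(password, salt, digest):
        """Recompute the hash and compare in constant time."""
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, digest)

    salt, digest = hash_password("1234")
    print(verify_password("1234", salt, digest))   # True
    print(verify_password("guess", salt, digest))  # False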


Security Flaw #4: Compromised Forever, or How nobody would be able to change their passwords

Was it really the case that Lars had ignored the instructions and kept his original password?  After all that had happened, I thought that maybe someone had done the same as I had, and thus now had access to my conference password.  So I decided to change it online.  However, it turned out that changing the password did not work – you would always retain the old one, which would still be happily displayed to you.  The good news was that this way, nobody would have been able to reuse existing passwords – and anyway, had I really wanted my password changed, another mail to the organizers might have done the trick.  At this point, I decided not to stress the relationship between the organizers and their software developers any further, and to let this be.

Lesson 4: Always allow your users to reset their access data if they fear it may have been compromised.
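
As a minimal sketch of such a reset flow in Python (the user store and mailer are hypothetical stand-ins): generate a fresh random password, store only its hash, and send it to the address on record – the old password, compromised or not, becomes worthless.

    import secrets

    password_hashes = {}  # hypothetical stand-in for the real user database

    def send_mail(email, text):
        print(f"To {email}: {text}")  # hypothetical stand-in for a real mailer

    def reset_password(email):
        """Issue a fresh random password; store only its hash (see Lesson 3)."""
        new_password = secrets.token_urlsafe(12)
        password_hashes[email] = hash(new_password)  # use a real salted hash in practice
        send_mail(email, "Your new conference password: " + new_password)

    reset_password("lars@example.com")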

All's well that ends well: The conference was truly magnificent, and as far as I know, nobody's data was compromised in any way.  Of course, anybody could have gone through the steps described above, skimming data without ever reporting anything.  But luckily, the first registrant (me) pointed out the issues before some fraudster could spot them, and of course, any of my colleagues would have done just the same.  When your customers are nice people, consider yourself lucky.

Post Scriptum: The Horrible Homebrew, or Why it may be better to build on well-tested platforms

This brings me to the meta-lesson to be learned here – one about process rather than product.  If you set up a system from scratch – be it a conference management system, a shop, a student registration system, whatever – be aware of the many risks this entails, and be sure to have independent and thorough security testing.  Using an existing, established, well-tested system instead may lower risk and overall cost, even if it costs more upfront.  When the damage is done, you will wish you had decided differently – but by then, it may be too late.

Final Lesson:  When deciding between building and using a system, consider all risks and associated costs. If you build a new system, thoroughly test it for security.  If you use an existing system, be sure it is well tested.

2016-03-10

Mining Sandboxes for Security

For the past three years, my students and I have worked on a novel and general approach to address software security.  This week at CeBIT, we're happy to spread some good news.

The concept of "Mining Sandboxes" protects against unexpected changes of software behavior and thus drastically reduces the attack surface of software systems. Our "Boxmate" prototype automatically mines program behavior by executing generated tests, systematically exploring the program’s behavior together with the accessed resources. The collected behavior rules form a sandbox, which at production time prohibits behavior not seen during testing. This brings several compelling features:

No unexpected behavior changes. The mined sandbox prevents behavior changes caused by latent malware, vulnerability exploitations, malware infections, or targeted attacks.

Closing the backdoors. The mined sandbox protects against backdoors that would not be discovered during normal usage.

No malware patterns required. The approach assumes no information about earlier or future attacks; it protects against known and novel attacks alike.

No training in production. In contrast to anomaly detection systems, all “normal” behavior is already explored during testing. The program is protected even before its first deployment.

No code required.  We require no knowledge of source or binary code, and can thus handle obfuscated, obscure, or adversarial programs.
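
In spirit, the approach boils down to a learn-then-enforce loop.  A hypothetical sketch in Python (my illustration of the idea, not the actual Boxmate implementation; resource names are made up):

    # Hypothetical sketch of the mining-sandboxes idea, not the Boxmate code.
    mined_rules = set()
    TESTING = True  # switched off when the program goes into production

    def access(resource):
        """Gatekeeper for every sensitive resource the program touches."""
        if TESTING:
            mined_rules.add(resource)  # mining phase: record observed behavior
        elif resource not in mined_rules:
            # production phase: behavior not seen during testing is blocked
            raise PermissionError("Access to %s not seen during testing" % resource)

    # Mining phase: generated tests systematically explore the program.
    for resource in ["contacts.read", "net.connect:api.example.com"]:
        access(resource)

    TESTING = False

    # Production phase: expected behavior passes; unexpected behavior is blocked.
    access("contacts.read")  # fine - seen during testing
    access("sms.send")       # raises PermissionError - a potential attack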

Want to get a brief overview of how it works? Here's a video, narrated by yours truly:


Conceptually, the techniques of Mining Sandboxes scale to arbitrary code size and can be applied to mobile apps, embedded systems, and server software alike.  Mining sandboxes is fully automatic, such that vendors, developers, and users can mine, inspect, compare, and exchange sandboxes at any time.  And for the testing researchers among us: This research leverages the incompleteness of testing and turns it into an advantage – actually producing guarantees from testing.

At this point, all of this is still research, so you cannot yet buy it in a shop near you.  And as with any big set of benefits, there are also drawbacks – in particular with legitimate functionality not found during testing – and there are still loads and loads of things to do.  But our first results with our prototype on real-world apps are more than promising; we have a nice paper coming up at the ICSE conference in May 2016 in Austin, Texas.  And I am even more excited about this work than about any of our pioneering work before – more than, say, Delta Debugging or Mining Software Repositories – because if it succeeds, it would not only impact the lives of software developers, but actually address many of the software security problems we see in the news every day.

If this has captured your attention, you can read more about the project at its site: http://www.boxmate.org/.  Or you can visit us at the CeBIT computer fair in Hannover between March 14 and March 18 (Hall 6, Stand D 28); I will be on site Monday and Tuesday.  I'd love to engage in discussions!