Abstracts
Andrew Butterfield (University of Dublin, Ireland): Reasoning about I/O in functional programs
We look at formalisms for reasoning about the effects of I/O in pure functional programs, covering both the monadic I/O of Haskell and the uniqueness-based framework used by Clean. The material will cover comparative studies of I/O reasoning for Haskell, Clean and a C-like language, as well as describing the formal infrastructure needed and tool support available to do such reasoning.
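To fix intuitions about the first of these two frameworks, here is a minimal Haskell sketch of monadic I/O (our illustration, not taken from the lectures): effects live in the IO type and are sequenced explicitly, which is what formal reasoning about the order and extent of I/O can exploit.

```haskell
-- Minimal sketch of monadic I/O in Haskell: the do-notation sequences effects
-- explicitly, so the order of I/O actions is visible in the program text.
main :: IO ()
main = do
  putStrLn "What is your name?"
  name <- getLine                -- an effectful action whose result is bound
  putStrLn ("Hello, " ++ name)   -- a pure expression used inside an effect
```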
Clemens Grelck (University of Amsterdam, The Netherlands): Multicore SAC
We present the design of the data parallel, functional language SaC (Single Assignment C). SaC combines a purely functional, stateless and side-effect-free semantics with a syntax familiar to imperative programmers: functional programming with curly brackets. SaC not only features stateless multi-dimensional arrays as first-class citizens; in fact, any SaC expression evaluates to an array. Two features are characteristic of SaC: firstly, code that abstracts not only from the extent of arrays along individual dimensions but also from the number of dimensions; and secondly, an (almost) index-free style of programming that treats arrays in a holistic way rather than as loose collections of elements.
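As a rough analogue of this rank-generic, index-free style, the following Haskell sketch (SaC itself uses a C-like syntax, so this is only an illustration in a different language) applies one operation uniformly to scalars, vectors and matrices without ever mentioning indices or the number of dimensions:

```haskell
-- A Haskell analogue (not SaC syntax) of rank-generic, index-free code:
-- a single definition of "increment" applies to scalars, vectors, matrices, ...
class Lift a where
  lift :: (Double -> Double) -> a -> a

instance Lift Double where
  lift f = f                      -- base case: a scalar

instance Lift a => Lift [a] where
  lift f = map (lift f)           -- add one dimension by mapping

increment :: Lift a => a -> a
increment = lift (+ 1)

main :: IO ()
main = do
  print (increment (1 :: Double))                -- rank 0
  print (increment [1, 2, 3 :: Double])          -- rank 1
  print (increment [[1, 2], [3, 4 :: Double]])   -- rank 2
```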
These features make SaC a high-productivity language for computationally intensive application domains. At the same time, SaC is also a high-performance language that competes with C and Fortran in these application areas through compilation technology. The abstract view of arrays, combined with a functional semantics, supports far-reaching program transformations. In conjunction with the data parallel approach, this makes SaC an ideal candidate for automatic compilation to a variety of multi/many-core architectures. The SaC compiler currently generates code for standard SMP architectures, NVidia GPGPUs and the MicroGrid, a many-core architecture developed at the University of Amsterdam.
The CEFP lecture provides an introduction to the basic language design concepts of SaC, followed by an illustration of how these concepts can be harnessed to write highly abstract, reusable and elegant code. We conclude by outlining the major compiler technology challenges in achieving runtime performance levels that are competitive with low-level, machine-oriented programming environments across a variety of multi/many-core architectures.
Johan Jeuring (Utrecht University, The Netherlands): Strategies for learning functional programming
In these lectures I will introduce an interactive tool that supports writing simple functional programs. Using this tool, students learning functional programming:
develop their programs incrementally,
receive feedback about whether or not they are on the right track,
can ask for a hint when they are stuck,
get suggestions about how to refactor their program.
The tool itself is implemented as a functional program, and uses fundamental concepts such as rewriting, parsing, strategies, program transformations, and higher-order combinators like the fold. I will introduce these concepts, and show how they are used in the implementation of the interactive functional programming tutor.
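As a small illustration of the last of these building blocks, the fold combinator captures structural recursion over a list once and for all, so that many familiar functions become instances of one higher-order scheme. A minimal Haskell sketch (ours, not taken from the tutor itself):

```haskell
-- foldr captures the recursion pattern; sum' and length' are instances of it.
sum' :: Num a => [a] -> a
sum' = foldr (+) 0

length' :: [a] -> Int
length' = foldr (\_ n -> n + 1) 0

main :: IO ()
main = print (sum' [1, 2, 3 :: Int], length' "abc")
```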
Rita Loogen (Philipps University Marburg, Germany): Eden
Eden is a parallel functional programming language which extends Haskell with constructs for the definition and instantiation of parallel processes. Common and sophisticated parallel communication patterns and topologies, i.e. algorithmic skeletons, are provided as higher-order functions in a skeleton library written in Eden. Eden is geared toward distributed settings, i.e. processes do not share any data.
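To give a flavour of what "skeletons as higher-order functions" means, the following plain Haskell sketch shows the shape of a simple farm skeleton. It is sequential and only an analogue of what the Eden skeleton library provides in parallel form; Eden's actual process-creation constructs are not shown here.

```haskell
-- A farm skeleton as a higher-order function: distribute work, apply the
-- worker, combine the results.  In Eden each worker would run as a process.
farm :: ([a] -> [[a]])      -- distribute the input among workers
     -> ([[b]] -> [b])      -- combine the workers' results
     -> (a -> b)            -- the worker function
     -> [a] -> [b]
farm distribute combine worker = combine . map (map worker) . distribute

-- a trivial distribution scheme: chunks of two elements
chunksOf2 :: [a] -> [[a]]
chunksOf2 [] = []
chunksOf2 xs = take 2 xs : chunksOf2 (drop 2 xs)

main :: IO ()
main = print (farm chunksOf2 concat (+ 1) [1 .. 10 :: Int])
```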
Simon Marlow (Microsoft Research, United Kingdom): Parallel and Concurrent Haskell
Being a purely functional language, Haskell is an attractive platform on which to build and experiment with parallel and concurrent programming paradigms. Over the lifetime of Haskell, many people have been doing just that, with the result that Haskell has an incredibly rich ecosystem of parallel and concurrent models with which we can write our programs.
In these lectures I plan to introduce the more popular of the programming models supported by Haskell, to give you a taste of programming multicores in Haskell. In the parallel space, Haskell provides deterministic parallelism, allowing parallel algorithms to be clearly and succinctly expressed. In the concurrent space, Haskell provides lightweight threading and a range of high-level communication abstractions (such as STM), which make programming concurrently and asynchronously a breeze. I hope to include practical exercises that demonstrate the best of each of these paradigms; we'll speed up some real programs using Haskell's parallelism, and use concurrency to write some interactive programs using asynchronous techniques.
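As a small foretaste of the concurrency side (our sketch, not taken from the lectures), the following program forks lightweight threads that update a shared counter via STM. It assumes only the standard stm package, and uses a crude threadDelay in place of proper synchronisation.

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.STM
import Control.Monad (forM_, replicateM_)

main :: IO ()
main = do
  counter <- newTVarIO (0 :: Int)
  -- fork ten lightweight threads, each performing 1000 atomic increments
  forM_ [1 .. 10 :: Int] $ \_ ->
    forkIO $ replicateM_ 1000 $
      atomically (modifyTVar' counter (+ 1))
  threadDelay 100000            -- crude wait; real code would synchronise
  readTVarIO counter >>= print  -- expected result: 10000
```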
Greg Michaelson (Heriot-Watt University Edinburgh, Scotland): Box Calculus
The box calculus (Grov & Michaelson, 2010) is a formal system for reasoning about box-based systems. The box calculus was developed for the Hume language, which has an explicit separation between finite-state coordination across concurrent boxes and functional control within boxes. Thus, the box calculus can account for program transformations between as well as within boxes. We will survey standard techniques for reasoning about functional languages, introduce Hume programming, meet the box calculus and consider how we may use it to refactor Hume programs to optimise concurrency.
Rinus Plasmeijer (Radboud University Nijmegen, The Netherlands): Defining distributed GUI applications with iTasks
The iTask system is a combinator library written in Clean offering a declarative, domain-specific language for defining workflows. From a declarative specification, a complete multi-user, web-enabled workflow management system (WFMS) is generated.

The iTask system makes heavy use of generic programming techniques. For any first-order type, a web page for editing values of that type can be generated automatically. In this way, distributed, interactive tasks can be created. The distribution of tasks over users and the order in which tasks are performed are defined by the iTask workflow combinators.

Using similar generic techniques, GUI elements such as buttons, menus, dialogues and windows can be defined in a declarative way as well. Combinators control their evaluation order. A task can thus become a complete GUI application that can be used to accomplish the work at hand. A task can provide a view on shared data, which serves as a shared model. In this way, distributed, multi-user GUI applications can be defined, such as chat applications or dashboard applications in which the actions of one user can be seen by others.

Hence the iTask system can be regarded as a new paradigm for programming distributed GUI applications declaratively, at a high level of abstraction.
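To convey the combinator flavour in a few lines, here is a toy sketch written in Haskell rather than Clean. The names (Task, andThen, enterInformation, viewInformation) loosely echo iTask conventions but are only hypothetical stand-ins, not the real library; the generically generated web editors are replaced by console prompts.

```haskell
-- Toy sketch of a task-combinator style: a Task eventually yields a value,
-- and combinators sequence tasks.  Illustrative only, not the iTask API.
newtype Task a = Task { runTask :: IO a }

-- sequence two tasks, feeding the result of the first into the second
andThen :: Task a -> (a -> Task b) -> Task b
andThen (Task t) f = Task (t >>= runTask . f)

-- stand-in "editor" task: the real system generates a web form generically
-- from the type; here we simply read a value from the console
enterInformation :: Read a => String -> Task a
enterInformation prompt = Task (putStrLn prompt >> fmap read getLine)

viewInformation :: Show a => String -> a -> Task ()
viewInformation label x = Task (putStrLn (label ++ ": " ++ show x))

-- a tiny two-step workflow
askAge :: Task ()
askAge =
  enterInformation "Please enter your age:" `andThen` \age ->
  viewInformation "You entered" (age :: Int)

main :: IO ()
main = runTask askAge
```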
Kostis Sagonas (National Technical University of Athens, Greece): Multicore Erlang
Mary Sheeran (Chalmers University of Technology, Sweden): Feldspar: Implementation and Application
We will present Feldspar, a domain-specific language for Digital Signal Processing algorithm design. Feldspar is implemented as an embedded language in Haskell. We will explain the rationale behind Feldspar's design, and demonstrate via examples how it offers a higher-level approach to writing array-manipulating C code. We will show in detail how the front end of Feldspar is implemented as an embedded DSL, and through this example cover important topics such as the distinction between shallow and deep embeddings.
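To fix intuitions before the lectures, the following small Haskell sketch (ours, not Feldspar itself) contrasts a shallow embedding, where DSL terms are represented directly by their meaning, with a deep embedding, where terms are data that can later be analysed, optimised or compiled, for instance to C.

```haskell
-- Shallow embedding: a DSL term *is* its meaning (here, just an Int).
type ShallowExpr = Int

addS :: ShallowExpr -> ShallowExpr -> ShallowExpr
addS = (+)

-- Deep embedding: a DSL term is an abstract syntax tree that can be
-- inspected, transformed, and then interpreted or compiled.
data Expr
  = Lit Int
  | Add Expr Expr
  deriving Show

eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b

main :: IO ()
main = do
  print (addS 1 2)              -- shallow: computed immediately
  let e = Add (Lit 1) (Lit 2)   -- deep: an inspectable term
  print e
  print (eval e)
```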