
2016/17 seminar archive

Autumn 2016

12th October: Programming and Proving with Concurrent Resources

Ilya Sergey (UCL) 

In the past decade, significant progress has been made towards the design and development of efficient concurrent data structures and algorithms that take full advantage of parallel computation. Due to sophisticated interference scenarios and the large number of possible interactions between concurrent threads working over the same shared data structures, ensuring full functional correctness of concurrent programs is challenging and error-prone.

In my talk, through a series of examples, I will introduce Fine-grained Concurrent Separation Logic (FCSL), a mechanized logical framework for implementing and verifying fine-grained concurrent programs.

FCSL features a principled model of concurrent resources, which, in combination with a small number of program and proof-level commands, is sufficient to give useful specifications and verify a large class of state-of-the-art concurrent algorithms and data structures. By employing expressive type theory as a way to ascribe specifications to concurrent programs, FCSL achieves scalability: even though the proofs for libraries might be large, they are done just once.


19th October: Detecting All High-Level Dataraces in an RTOS Kernel

Deepak D'Souza (Indian Institute of Science) 

A high-level race occurs when an execution interleaves instructions corresponding to user-annotated critical accesses to shared memory structures. Such races are good indicators of atomicity violations. We propose a technique for detecting *all* high-level dataraces in a system library like the kernel API of a real-time operating system (RTOS) that relies on flag-based scheduling and synchronization. Our methodology is based on model-checking, but relies on a meta-argument to bound the number of task processes needed to orchestrate a race. We describe our approach in the context of FreeRTOS, a popular RTOS in the embedded domain.

This is joint work with Suvam Mukherjee and S. Arun Kumar.


2nd November: Integrating Symbols into Deep Learning

Prof Stephen Muggleton FREng (Imperial College London) 

Computer Science is the symbolic science of programming, incorporating techniques for representing and reasoning about the semantics, correctness and synthesis of computer programs. Recent techniques involving the learning of deep neural networks have challenged the "human programmer" model of Computer Science by showing that bottom-up approaches to program synthesis from sensory data can achieve impressive results, ranging from visual scene analysis and expert-level play in Atari games to world-class play in complex board games such as Go. Alongside the successes of Deep Learning, increasing concerns are being voiced in the public domain about the deployment of fully automated systems with unexpected and undesirable behaviours. In this presentation we will discuss the state-of-the-art and future challenges of Machine Learning technologies that promise the transparency of symbolic Computer Science with the power and reach of sub-symbolic Deep Learning. We will discuss both weak and strong integration models for symbolic and sub-symbolic Machine Learning.

Bio: Stephen H. Muggleton FBCS, FIET, FAAAI, FECCAI, FSB, FREng is Professor of Machine Learning and Head of the Computational Bioinformatics Laboratory at Imperial College London. Stephen Muggleton is currently Director of the Syngenta University Innovation Centre at Imperial College and holds a Royal Academy of Engineering/Syngenta Research Chair. He received his Bachelor of Science degree in Computer Science (1982) and Doctor of Philosophy in Artificial Intelligence (1986), supervised by Donald Michie, at the University of Edinburgh. Following his PhD, Prof. Muggleton went on to work as a postdoctoral research associate at the Turing Institute in Glasgow (1987–1991) and later as an EPSRC Advanced Research Fellow at Oxford University Computing Laboratory (OUCL) (1992–1997), where he founded the Machine Learning Group. In 1997 he moved to the University of York and in 2001 to Imperial College London.


9th November: Build Systems at Scale

Andrey Mokhov (Newcastle)

Most build systems start small and simple, but over time grow into hairy monsters that few dare to touch. As we demonstrate in this talk, a few issues cause major scalability challenges for build systems, and many pervasively used build systems (e.g. Make) do not scale well.

We use functional programming to design abstractions for build systems, and implement them on top of the Shake library, which allows us to describe build rules and dependencies. To substantiate our claims, we engineer a new build system for the Glasgow Haskell Compiler. The result is more scalable, faster, and spectacularly more maintainable than its Make-based predecessor.
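
To give a flavour of the programming model the talk builds on, the sketch below is a minimal Shake build script in the style of the library's introductory examples; it is not part of the GHC build system, and the file names and gcc invocation are purely illustrative. Each rule states how to produce a target and records its inputs with need, which is how Shake tracks dependencies and decides what to rebuild.

    -- Minimal Shake sketch (illustrative only): build any .o file from the
    -- corresponding .c file, recording the dependency so rebuilds happen
    -- exactly when the source changes.
    import Development.Shake
    import Development.Shake.FilePath

    main :: IO ()
    main = shakeArgs shakeOptions $ do
        -- the final target we ask the build system to produce
        want ["hello.o"]

        -- rule: any *.o is built from the .c file with the same base name
        "*.o" %> \out -> do
            let src = out -<.> "c"
            need [src]                      -- record the dependency
            cmd_ "gcc -c" [src] "-o" [out]  -- run the compiler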


23rd November: Computer Science as an Application Domain for Data Science

Colin Johnson (Kent) 

The last few years have witnessed a revolution in the application of data science across many aspects of human endeavour. However, the application back to problems in computer science itself has been rather thin. This talk will explore this issue in general, and discuss some examples of and opportunities for the application of data science to computer science and software development. This will include a number of examples, e.g. of the application of information-theoretic learning to fault localisation.

Bio: Colin Johnson is Associate Dean of Sciences and Reader in Computer Science at the University of Kent. His research interests are in the areas of machine learning and data mining, both in the development of new methods in those areas and in their applications, in areas such as bioinformatics, engineering, and digital humanities. Recently, he has been investigating how data science techniques can be used in areas such as mathematics and software development.


30th November: Business Innovation for Digital Technologies

Dick Whittington (RAEng visiting professor, York) 

Throughout their technical education, engineers of all flavours build up a knowledge of technologies and the techniques and processes that make them work. But starting a business – or contributing effectively to an early stage business – or injecting innovation into an existing business – needs more than that. To contribute effectively to a business, it’s necessary also to know about notions of product, platform and service, about customers and ecosystems, about markets and their dynamics, about intellectual property and its protection, about risk, company finance, investment, governance and legal requirements; and above all, about what’s needed to create and scale something of value. This seminar explains how the Business Innovation course is addressing that gap, for students and staff anticipating an adventure into business. It will do this with reference to both the national and local economic context, and future options for the course.

Bio: Dick is a serial entrepreneur, business mentor and investor, with over 30 years of experience in business. He is co-founder of MooD International, a software business recognised through multiple Queen’s Awards: two for Innovation, and a third for International Trade. The company was voted “York Business of the Year” for 2015. In 2012 Dick was elected Fellow of the Royal Academy of Engineering. He plays an active role within the Academy, including within its successful Enterprise Hub where he acts as mentor for new spin-out technology companies. Funded by the Academy, in 2015 Dick was appointed Visiting Professor of Business Innovation at the University of York. He is also an active mentor and angel investor within several London and regional technology accelerator programmes.

Spring 2017

25th January: Developing new technologies in the era of person-centred healthcare: reflections of a health economist

Andrea Manca (Centre for Health Economics, York)

Person-centred healthcare (sometimes referred to as personalised medicine, precision medicine, etc.) is becoming one of the hottest topics on the public/private agenda worldwide. It has supporters among industry, patient organisations, healthcare professionals, academics, funders and politicians. Thus, devoting energies and resources to pursue (and hopefully realise) the promises of person-centred healthcare would seem to be a win-win strategy for a number of stakeholders. Indeed, recent years have seen an acceleration in R&D efforts towards the development of novel person-centred diagnostics, drugs and medical devices (both therapeutic and support). But here lies the critical issue: how can Society shape the future of healthcare, and identify and prioritise R&D investments towards acceptable, cost-effective and sustainable person-centred interventions with the highest return in terms of population health and other relevant outcomes? Cost-Effectiveness Analysis (CEA) for Health Technology Assessment (HTA) plays a pivotal role in informing such decisions in many jurisdictions around the world. This talk begins by describing the standard CEA framework and its use in HTA to inform R&D and technology adoption decisions in the UK and elsewhere. It then discusses the challenges of applying standard CEA methods to the evaluation of person-centred healthcare technologies, and provides examples of how these issues can be addressed. This talk may be of interest to researchers in Biology, Chemistry, Electronics, Computer Science and Physics who are actively pursuing initiatives in the area of new healthcare technologies or are considering doing so. It is intended to be an opportunity for the audience to find out how health economics can support the translation of the clinically relevant and patient-centred aspects of their research into the healthcare technologies of the future, hopefully opening up new collaborations and furthering the University of York's vision of interdisciplinary research.


8th February: Broad Hearts And DeepMinds: Building Creative AI People Can Relate To 

Michael Cook (University of Falmouth) 

For most people working in AI and games these days the key word is 'superhuman'. How can we build AI to beat humans at Go, at Poker, at Starcraft? In Computational Creativity - a subfield of AI where people design software to paint, write poetry and design videogames - there's a more pressing question: why would anyone care what an AI did, anyway? In this talk I'll discuss the brief history of Computational Creativity; the software soupmaker that has no friends; the AI game designer that never tidies up; and the missing piece in today's AI narrative: failure.


15th February: Making sense of sensing

Simon Dobson (St Andrews) 

Sensor systems are increasingly important in providing robust, reliable, rich information to support scientific and policy activities. Obtaining that robustness, reliability, and richness, however, requires that we develop mechanisms for analysing the sensed data as it arrives, and for fitting it into a wider interpretive context that can be used by scientists and decision-makers. In this talk we discuss these challenges and explore a couple of approaches we've been investigating to improve our understanding of sensor systems as they degrade, and to improve the ways in which we classify sensor observations against expectations.

Bio: Simon Dobson is Professor of Computer Science at the University of St Andrews. He works on complex and sensor systems, especially on sensor analytics and the modelling of complex processes. His work has given rise to over 150 peer-reviewed papers and to leadership roles in research grants worth over £30M, most recently as part of a £5M EPSRC-funded programme grant in the Science of Sensor Systems Software. He was also the founder and CEO of a research-led start-up company. He holds a BSc and DPhil in computer science, is a Chartered Fellow of the British Computer Society, a Chartered Engineer and Senior Member of the IEEE and ACM.


1st March: Real-Time Wormhole Networks

Leandro Indrusiak (York)

Wormhole switching is a widely used network protocol, mostly because of the small buffering requirements it imposes on each network router, which in turn results in low area and energy overheads. For instance, this is of key importance in multi-core and many-core processors based on networks-on-chip (NoC), as the area and energy share of the on-chip network itself can reach up to 30% of the area and energy used by the whole processor. However, the nature of wormhole switching allows a single packet to simultaneously acquire multiple links as it traverses the network, which can make worst-case packet latencies hard to predict. This becomes particularly severe in large and highly congested networks, where complex interference patterns become the norm. Still, worst-case latency prediction models are needed to make such systems amenable to real-time applications.

Different link arbitration mechanisms can result in different worst-case latency prediction models, and recent research has addressed NoCs with TDM, round-robin and priority arbitration. In this talk, I will focus on priority-preemptive wormhole NoCs. I will give a detailed account of the architectural features that can support that type of arbitration in NoCs, and will review the latest research on analytical methods aimed at predicting worst-case packet latency over such networks. I will then show opportunities and advantages of using priority-preemptive NoCs in the domains of multi-mode, mixed-criticality and secure systems, where the trade-off between flexibility and predictability that is inherent to such networks can be fully exploited.
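
For a flavour of what such analytical methods look like, the recurrence below is a generic, textbook-style formulation for priority-preemptive wormhole arbitration with direct interference only; it is offered as an illustration and is not necessarily the exact model presented in the talk. The worst-case latency R_i of a traffic flow \tau_i is the smallest solution of

    R_i = C_i + \sum_{\tau_j \in S_i^D} \left\lceil \frac{R_i + J_j}{T_j} \right\rceil C_j

where C_i is the flow's no-contention (basic) latency, S_i^D is the set of higher-priority flows sharing at least one link with \tau_i, T_j their periods and J_j their release jitters; the recurrence is iterated from R_i = C_i until it converges or exceeds the deadline. Published analyses refine this basic form with additional terms for indirect interference and buffering effects.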


Bio: Leandro Soares Indrusiak has been a faculty member of the University of York's Computer Science department since 2008, and currently holds a readership. He is a member of the Real-Time Systems (RTS) research group, and his current research interests include on-chip multiprocessor systems, distributed embedded systems, resource allocation, cloud computing, and real-time networks. He has published more than 120 peer-reviewed papers in the main international conferences and journals covering those topics (seven of them received best paper awards). He has graduated seven doctoral students over the past ten years, and currently supervises three doctoral students and three post-doc research associates.

He graduated in Electrical Engineering from the Federal University of Santa Maria (UFSM) in 1995 and obtained an MSc in Computer Science from the Federal University of Rio Grande do Sul (UFRGS), Porto Alegre, in 1998. He held a tenured assistant professorship at the Informatics department of the Catholic University of Rio Grande do Sul (PUCRS) in Uruguaiana from 1998 to 2000. His PhD research started in 2000 at UFRGS and extended his MSc work on design automation environments for microelectronic circuits. From 2001 to 2008 he worked as a researcher at the Technische Universität Darmstadt, Darmstadt, Germany, where he finished his PhD and then led a research group on System-on-Chip design. His binational doctoral degree was jointly awarded by UFRGS and TU Darmstadt in 2003.

He is the principal investigator of the EU-funded SAFIRE and DreamCloud projects, and a co-investigator on a number of other funded projects. He has held visiting faculty positions in five different countries, is a member of the HiPEAC European Network of Excellence, and a senior member of the IEEE.


8th March: Systems Thinking for dealing with complexity in Design: the case of Service Design

Ioannis Darzentas (Leverhulme Visiting Professor) 

Are the problems Design deals with always complex, or are they sometimes simply complicated? This talk will briefly introduce Systems Thinking as a way of tackling complex design problem spaces. The thesis is that at least the human-centric ones (which are by far the majority) are always complex. Design Thinking in most cases acknowledges complexity, but it has a paucity of methods to deal with it. Service Design will be introduced as a rapidly emerging area of human-centric design. Service Design is naturally complex and is a prime paradigm that gains from adopting Systems Thinking for capturing and understanding the design problem space and for moving towards praxis. An important outcome of our research has been to reorient the focus of the Design Intervention, in that products (tangible or intangible) are byproducts of Service Design.


15th March: Usable security: a view from HCI

Helen Petrie

Researchers in human-computer interaction (HCI) have long been interested in the problem of system security, particularly from the point of view of the usability of authentication systems. Angela Sasse coined the well-known adage that “users are not the enemy” in the development of secure systems. However, it is interesting that research in HCI has largely focussed on very specific parts of the authentication process, for example analysing the kinds of passwords people create, the kinds of potentially insecure behaviours they undertake in relation to passwords, and so on. I can find no researchers who have thought about the password creation system (PCS) as a small interactive system (which is what HCI researchers study all the time) that can be studied for its usability and user experience. An obvious implication, if one thinks of PCSs in this way, is that if the system has poor usability, more cognitive effort will be required by users to understand how the system works, which will leave less cognitive effort available to create good passwords. In this talk I will outline our understanding of the PCS as an interactive system and discuss some of the work I have undertaken with two PhD students to unpack the implications of approaching the topic in this way.

 

Summer 2017

3rd May 2017: Behaviours as Design Components in Cyber-physical Systems

Michael Jackson (Open University) 

The behaviour of a realistic cyber-physical system is a complex structure of constituent behaviours. These constituent behaviours must be identified, individually developed, and combined into a complex whole that will satisfy the stakeholders’ requirements. This work must precede the design and structuring of software architecture for efficient deployment and execution. This talk explains an approach based on these ideas and presents some of its underlying principles and claimed advantages.

Bio: Michael Jackson has worked in software since 1961. His JSP program design method, described in Principles of Program Design (1975), was chosen as the standard method for UK government software development. Later work with Pamela Zave at AT&T, on telecommunication systems architecture, is the subject of many papers and several patents. More recent work on problem structure and analysis is described in his books Software Requirements & Specifications (1995) and Problem Frames (2001), and in many published papers. He has held a visiting research chair at The Open University for sixteen years, participating in research projects there and with other research and academic institutions.


10th May 2017: Big Data and Games
 
Anders Drachen (York, DC Labs) 
 
The interactive entertainment industry has grown dramatically in the past decade, and recently reached 100 billion USD in global yearly revenue, making it one of the super-heavyweight sectors in entertainment. Some estimates place the number of people worldwide who currently play computer games at 2 billion. Tracking detailed interaction behavior from this number of people results in truly massive datasets. 
 
With the rapid growth and innovation in the sector, business-related data has come to the forefront as a means for informing decision making. Colloquially referred to as "game analytics", the collection and analysis of big data in interactive entertainment has had a direct impact on the interactive entertainment industry within the past few years, as the practice of tracking and analyzing the behavior of players and processes has emerged as a key component of game development in this age of mobile platforms, increased game persistence and non-retail-based revenue models.
 
In this presentation we will take a deep dive into game analytics, covering the background to the current situation in the industry and reviewing the practice of game analytics, including the fundamental approaches towards problem-solving and the knowledge discovery process inherent in business intelligence work in games. We will talk about the crucial aspect of deciding which behavioral features to focus on and the stakeholders involved in these considerations, and why this is crucial in games. The methods and analyses used - from simple descriptive statistics to machine learning and spatio-temporal behavioral analysis - will be discussed, and the kinds of problems the industry is working with on a daily basis will be outlined.
 
A central focus of the presentation will be the user, the player, who is the alpha and omega for the success of commercial or serious games.
 
The key takeaway from this informal talk will be an understanding of the importance of business intelligence in interactive entertainment, and of some of the nuts and bolts of behavioral analytics work and the role it plays in game development.
 
Bio: Anders Drachen, Ph.D. is a veteran Data Scientist, newly appointed Chair at the DC Labs, Department of Computer Science, University of York, and Game Analytics consultant at The Pagonis Network. His work in data science is focused on analytics, big data, business intelligence, data mining, economics, business development and user research in the digital entertainment industry. His research and professional work is carried out in collaboration with companies spanning the industry, from big publishers to SMEs. He is the most published expert worldwide on the topic of analytics, data mining, user research and behavioral profiling in digital entertainment. He writes about analytics on andersdrachen.com. His writings can also be found on the pages of trade publications. His research has been covered by international media and received multiple awards.

24th May: The Science Research Council's Common Base Policy

Alistair Edwards

This is a story that needs to be told. It combines history, politics, management and technology. In the 1980s the forerunner to the EPSRC established its Common Base Policy. On the face of it, this was a sensible move to standardize computers to be used in research it sponsored. The computer chosen was the Perq. However, it all went wrong.

This seminar will trace some of this history and the lessons to be learned, partly from my own perspective in a very lowly position, developing the Pascal compiler that was to be part of the Common Base. This is not a presentation of research as such; there will be some technical content, but content suitable for a general audience. It is intended to be of equal interest to those of us who remember the 1980s and to those who think that is ancient history.


31st May: From graceful degradation to graceful amelioration: Continuous on-line adaptation in many-core systems

Gianluca Tempesti (Department of Electronics)

Imagine a many-core system with thousands or millions of processing nodes that gets better and better with time at executing an application, “gracefully” providing optimal power usage while maximizing performance levels and tolerating component failures. Applications running on this system would be able to autonomously vary the number of nodes in use to overcome three critical issues related to the implementation of many-core systems: reliability, energy efficiency, and on-line optimisation.

The approach is centred around two basic processes. Graceful degradation implies that the system will be able to cope with faults (permanent or temporary) or potentially damaging power consumption peaks by lowering its performance. Graceful amelioration implies that the system will constantly seek alternative implementations that represent an improvement from the perspective of some user-defined parameter (e.g. execution speed, power consumption).

Bio: Dr. Gianluca Tempesti received a B.S.E. in electrical engineering from Princeton University in 1991 and an M.S.E. in computer science and engineering from the University of Michigan at Ann Arbor in 1993. In 1998 he received a Ph.D. from the Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland. In 2003 he was granted a young professorship award from the Swiss National Science Foundation (FNS) and created the Cellular Architecture Research Group (CARG). In 2006 he joined the Department of Electronic Engineering at the University of York as a Reader in Intelligent Systems. His research interests include bio-inspired digital hardware and software, built-in self-test and self-repair, programmable logic, and many-core systems, and he has published over 80 articles in these areas.


7th June: Some Code Smells Have a Significant but Small Effect on Faults
 
Tracy Hall (Brunel) 

In this talk Tracy discusses an investigation of the relationship between faults and five of Fowler et al.'s least-studied code smells: Data Clumps, Switch Statements, Speculative Generality, Message Chains, and Middle Man. She discusses the development of a tool to detect these five smells in three open-source systems (Eclipse, ArgoUML, and Apache Commons); the collection of fault data from the change and fault repositories of each system; and the building of Negative Binomial regression models to analyse the relationships between smells and faults, reporting the McFadden effect size of those relationships. The results suggest that Switch Statements had no effect on faults in any of the three systems; Message Chains increased faults in two systems; Message Chains occurring in larger files reduced faults; Data Clumps reduced faults in Apache and Eclipse but increased faults in ArgoUML; Middle Man reduced faults only in ArgoUML; and Speculative Generality reduced faults only in Eclipse. File size alone affects faults in some systems but not in all systems. Where smells did significantly affect faults, the size of that effect was small (always under 10 percent). The findings suggest that some smells do indicate fault-prone code in some circumstances, but that the effect these smells have on faults is small. The findings also show that smells have different effects on different systems. The conclusion is that arbitrary refactoring is unlikely to significantly reduce fault-proneness and in some cases may increase it.

Bio: Professor Tracy Hall’s research is in software engineering. Her research interests centre on empirical studies, many in collaboration with companies. Tracy's current work is based around research into code faults, in particular the prediction of fault-prone code. However, her interest in human factors has also led her to begin looking at some of the human issues around the errors that developers make in code which result in particular types of faults. Tracy is also interested in the detection and analysis of bad smells in code. Tracy is the Head of Department of Computer Science at Brunel University London. Over the last 20 years she has conducted many empirical software engineering studies with a variety of industrial collaborators. She has published over 100 international peer-reviewed journal and conference papers and has been Principal Investigator on a variety of EPSRC projects. Tracy is Associate Editor for the Information and Software Technology journal and the Software Quality Journal. She is also a long-standing member of many international conference programme committees. Professor Hall is a member of the EPSRC Peer Review College.


14th June: Aiming for 4* Outputs in REF

Edwin Hancock 

Focusing on research outputs, I will commence by reviewing the results of the 2014 Computer Science REF, both nationally and at York. Then I will mention differences between outputs in the REF 2014 submission and the alternatives currently being considered by HEFCE following the Stern Report. Based on this I will present some personal views on what is likely to characterise a 4* paper, and how to construct one.


21st June: Metamorphic Testing for Programming Language Implementations

Alastair Donaldson (Imperial) 
 
I will give a brief overview, with examples, of two effective methods that can be used to test production compilers automatically in order to find defects early: differential testing and metamorphic testing.  I will then go into details of how we have successfully used metamorphic testing to find dozens of bugs in commercial compilers for GLSL, the OpenGL shading language, including both wrong-code and security-related issues.  In a nutshell, our method generates families of equivalent GLSL shader programs by applying semantics-preserving transformations to a set of initial test shaders, uses fuzzy image comparison to detect cases where rendering has gone wrong, and employs an algorithm similar to delta debugging to minimize a transformed shader program, finding a small change to an original test shader that exposes a compiler bug.
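
To make the metamorphic idea concrete, here is a deliberately tiny, self-contained sketch of the same pattern; it is not the GLSL tooling described above. A toy evaluator stands in for the compiler plus execution, an identity-preserving rewrite stands in for the semantics-preserving shader transformations, and a numeric tolerance stands in for fuzzy image comparison. The metamorphic property is simply that original and transformed programs must produce approximately the same result; a mismatch would indicate a bug in the system under test.

    -- A toy expression language; in the real setting these would be shader programs.
    data Expr = Lit Double | Add Expr Expr | Mul Expr Expr
      deriving Show

    -- The "system under test": a toy evaluator standing in for compile-and-run.
    eval :: Expr -> Double
    eval (Lit x)   = x
    eval (Add a b) = eval a + eval b
    eval (Mul a b) = eval a * eval b

    -- A semantics-preserving transformation: wrap subexpressions in identity
    -- operations (e + 0, e * 1), analogous to injecting dead code into a shader.
    transform :: Expr -> Expr
    transform (Lit x)   = Add (Lit x) (Lit 0)
    transform (Add a b) = Mul (Add (transform a) (transform b)) (Lit 1)
    transform (Mul a b) = Mul (transform a) (transform b)

    -- "Fuzzy" comparison, analogous to fuzzy comparison of rendered images.
    approxEqual :: Double -> Double -> Bool
    approxEqual x y = abs (x - y) <= 1e-9 * max 1 (max (abs x) (abs y))

    -- The metamorphic check: original and transformed programs must agree.
    check :: Expr -> Bool
    check e = approxEqual (eval e) (eval (transform e))

    main :: IO ()
    main = print (all check tests)
      where
        tests = [ Lit 3
                , Add (Lit 1) (Mul (Lit 2) (Lit 5))
                , Mul (Add (Lit 0.5) (Lit 1.5)) (Lit 4) ]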

Bio: Alastair Donaldson is a Senior Lecturer and EPSRC Early Career Fellow in the Department of Computing, Imperial College London, where he leads the Multicore Programming Group.  He is the recipient of the 2017 BCS Roger Needham Award for his research into many-core programming.  He has published more than 70 peer-reviewed papers in formal verification, multicore programming and software testing, and leads the GPUVerify project on automatic verification of GPU kernels, which is a collaboration with Microsoft Research.  Alastair coordinated the FP7 project CARP: Correct and Efficient Accelerator Programming, which completed successfully in 2015.  Before joining Imperial, Alastair was a Visiting Researcher at Microsoft Research Redmond, an EPSRC Postdoctoral Research Fellow at the University of Oxford and a Research Engineer at Codeplay Software Ltd.  He holds a PhD from the University of Glasgow.