UGSRP 2018 Talk Series Schedule
First Talk: Nicole Sultanum (Supervisor: F. Chevalier)
Title: More Text Please! Understanding and Supporting the Use of Visualization for Clinical Text Overview
Abstract: Clinical practice is heavily reliant on the use of unstructured text to document patient stories due to its expressive and flexible nature. However, a physician’s capacity to recover information from text for clinical overview is severely affected when records get longer and time pressure increases. Data visualization strategies have been explored to aid in information retrieval by replacing text with graphical summaries, though often at the cost of omitting important text features. This causes physician mistrust and limits real-world adoption. I will present our investigation into the role and use of text in clinical practice, and report on efforts to assess the best of both worlds—text and visualization—to facilitate clinical overview. We report on insights garnered from a field study, and the lessons learned from an iterative design process and evaluation of a text-visualization prototype, MedStory, with 14 medical professionals. The results led to a number of grounded design recommendations to guide visualization design to support clinical text overview.
Second Talk: Zhen Li (Supervisor: D. Wigdor)
Title: HoloDoc: Enabling Mixed Reality Sense Making
Abstract: In this video, we present HoloDoc, a mixed reality system that provides rich interactions while working with physical documents. HoloDoc tracks a user’s pen strokes and hand gestures to give users access to digital functionality, such as hyperlinking and search, within physical documents.
First Talk: Felipe Ferreira (Supervisor: S. Cook)
Title: Lower Bounds on Branching Programs Solving the Tree Evaluation Problem
Abstract: I will begin the presentation by introducing the P vs. L problem to provide the motivation for our research. Then I will describe the Tree Evaluation Problem (TEP for short), which is a problem we believe can be studied to solve the P vs. L problem. I will explain the restrictions we have made on the branching programs that solve the TEP in order to simplify the problem. I will then very briefly describe our results and give a very high level explanation of the proof we are currently working on. The presentation will finish with a sketch of the next steps of our research.
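For reference, the standard formulation of the problem (following Cook et al.) is as follows; the height h and alphabet size k below are the usual parameters.

```latex
% Tree Evaluation Problem TEP(h, k), in its usual formulation:
% the input is a complete binary tree of height h, where each leaf is
% labeled with a value in [k] = {1, ..., k} and each internal node v is
% labeled with an explicitly given function f_v : [k] x [k] -> [k].
\[
\mathrm{val}(v) =
\begin{cases}
\ell_v & \text{if } v \text{ is a leaf with label } \ell_v,\\
f_v\big(\mathrm{val}(u),\, \mathrm{val}(w)\big) & \text{if } v \text{ has children } u \text{ and } w,
\end{cases}
\]
% and the output is val(root). A lower bound of k^{Omega(h)} on the size of
% deterministic branching programs solving TEP would show TEP is not in L,
% and hence separate L from P -- which motivates the restricted models
% studied in this work.
```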
Second Talk: Devamardeep Hayatpur (Supervisor: D. Wigdor)
Title: Free-form Navigation and Object Manipulation in VR
Abstract: First, we will look at a 7-degrees-of-freedom free-form navigation system created in VR. Then, we demonstrate problems with using a free-form navigation system for precise transformations. We explore a set of rich interactions that allows the user to perform free-form navigation as well as constrained/precise navigation. Finally, we will look at ways to integrate object interaction and manipulation into this system.
Third Talk: Qiongsi Wu (Supervisor: G. Pekhimenko)
Title: Clang/LLVM 101
Abstract: We talk about the basics of compilers: what a compiler is, and how one is usually constructed. We briefly go through the Clang/LLVM C/C++ compiler infrastructure. We show that a compiler can be helpful in debugging and program transformation in surprising ways. We demonstrate optimizations that LLVM can perform, specifically loop unrolling and certain local optimizations, by analyzing an example that computes the sum of an array. Lastly, we briefly discuss open problems involving LLVM and OpenMP and give some hints for future research.
First Talk: Usman Masood Sadiq (Mentor: M. Brunet, Supervisor: A. Anderson)
Title: Using chess moves to assess human decision-making
Abstract: I will start by introducing how chess can be used as a model for human behaviour. Then I will go on to the concept of relative risk and how we use it to assess the quality of a move. Then I will go into more detail about the project: how we take unbiased data from online chess games to determine the actual probability of winning based on this large sample, and how it changes when certain factors are altered. Finally, I will talk about how we would use this data to predict when an individual would make a “bad move” and what measures we take to classify a move as a bad move.
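As a rough sketch of the idea (the precise definition used in the project may differ), relative risk compares the probability of an outcome under one condition with that under another; for a chess move m played from position s, one plausible formalization is:

```latex
\[
\mathrm{RR}(m \mid s) \;=\;
\frac{P\big(\text{win} \mid s,\ \text{move } m\big)}{P\big(\text{win} \mid s\big)}
\]
% Both probabilities are estimated empirically from the large sample of
% online games; a move with RR well below 1 lowers the player's empirical
% winning chances from position s and is a candidate "bad move".
```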
Second Talk: YueLan Qin (Supervisor: F. Rudzicz)
Title: Detecting and Reducing Gender Bias in Word Embeddings
Abstract: I would like to share an interesting NLP paper with you. I will first talk about how gender bias is captured in word embeddings, then introduce two debiasing algorithms, and finally discuss the results of the experiments.
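For a flavour of how one such debiasing algorithm works, here is a minimal sketch of the "neutralize" step of hard debiasing (Bolukbasi et al., 2016): project gender-neutral words off an estimated gender direction. The toy vectors are placeholders, not real embeddings.

```python
import numpy as np

# Toy 4-dimensional "embeddings" -- placeholders, not real word vectors.
emb = {
    "he":     np.array([ 0.8,  0.1,  0.0,  0.2]),
    "she":    np.array([-0.8,  0.1,  0.0,  0.2]),
    "doctor": np.array([ 0.3,  0.5,  0.4,  0.1]),
}

# Estimate a gender direction from a definitional pair (the paper uses the
# principal component of several such difference vectors).
g = emb["he"] - emb["she"]
g /= np.linalg.norm(g)

def neutralize(word):
    """Remove the component of a word vector along the gender direction."""
    v = emb[word]
    v_debiased = v - np.dot(v, g) * g
    return v_debiased / np.linalg.norm(v_debiased)

print(np.dot(neutralize("doctor"), g))  # ~0: no gender component remains
```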
Third Talk: Siqi Hao (Mentor: P. Vicol, Supervisor: R. Grosse)
Title: Musical Analysis by Synthesis
Abstract: I will first introduce the analysis-by-synthesis problem and the motivation for this research. Then, I will explain several relevant methods and models that are useful for solving this problem, including the Karplus-Strong algorithm, the constant-Q transform, WaveNet, and the WaveNet autoencoder. Finally, I will briefly discuss some possible approaches that use the models mentioned above.
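As a taste of the first of these, here is a minimal sketch of the Karplus-Strong plucked-string algorithm: a buffer of noise is repeatedly averaged and fed back, producing a decaying string-like tone. The sample rate and decay factor are illustrative choices.

```python
import numpy as np

def karplus_strong(freq=220.0, duration=1.0, sample_rate=44100, decay=0.996):
    """Synthesize a plucked-string tone via the Karplus-Strong algorithm."""
    n = int(sample_rate / freq)            # delay-line length sets the pitch
    buf = np.random.uniform(-1, 1, n)      # initial burst of white noise
    out = np.empty(int(sample_rate * duration))
    for i in range(len(out)):
        out[i] = buf[i % n]
        # Averaging adjacent samples acts as a low-pass filter; the decay
        # factor makes the tone die away like a real string.
        buf[i % n] = decay * 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out

samples = karplus_strong()  # write to a WAV file or play back to hear it
```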
"How to Write Great Research Papers" presented by Hervé Saint Louis
First Talk: Christina Chung (Supervisor: D. Wigdor)
Title: Meaningful Play
Abstract: In human computation games (HCGs), players perform real-world tasks as a by-product of playing a game. Since these tasks typically require domain knowledge, game designers map them onto simpler ones that non-experts can tackle. Details about the underlying task are often not revealed, leaving players oblivious to how they are truly making a contribution. One outstanding question is whether revealing such details can promote player motivation. This paper presents a study carried out with Amazon Mechanical Turk (AMT) workers using the MATCHMAKERS HCG. The study examined the impact of context disclosure at four levels of granularity: from zero context, to the task’s significance, to a description of the task, to how the task is accomplished through game mechanics. The study found that AMT workers were most motivated when given the task description. This work aims to provide insights into how context can be used to better motivate players of HCGs.
Second Talk: Graeme Stroud (Supervisor: A. Farzan)
Title: Parallelizing Sequential Code Using Auxiliary Variable Synthesis
Abstract: Many sequential programs on lists are not suitable for divide and conquer parallelism, but can easily be modified to make them so. It may be necessary for the sequential program to have auxiliary variables, so when the list is partitioned into two and the program is run on these two lists, the final result for the original list can simply be computed using the final values of the variables from the two lists. The auxiliary variables, and the procedure to combine the partial results, can easily be found manually for many common problems, but finding them automatically seems to be a difficult algorithmic problem. Examples of sequential code for computing scalar-valued functions on lists will be presented, along with current search-based algorithms to find the auxiliaries and combining procedures for these types of problems.
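To illustrate the target of the synthesis (this is a hand-written example, not the search algorithm itself): computing the maximum prefix sum of a list sequentially requires an auxiliary variable (the running sum), and with it the partial results of two halves can be combined in constant time.

```python
# Maximum prefix sum: max over all prefixes p of sum(p).
# The running total 's' is the auxiliary variable: it is not part of the
# answer, but it is exactly what makes the two halves combinable.

def summarize(xs):
    """Sequential pass returning (total sum, max prefix sum)."""
    s, m = 0, 0
    for x in xs:
        s += x
        m = max(m, s)
    return s, m

def combine(left, right):
    """Merge summaries of two adjacent sublists in O(1)."""
    (s1, m1), (s2, m2) = left, right
    # A prefix of the whole list is either a prefix of the left half, or
    # all of the left half followed by a prefix of the right half.
    return s1 + s2, max(m1, s1 + m2)

xs = [3, -4, 2, 5, -1]
mid = len(xs) // 2
# The divide-and-conquer result matches the sequential one:
assert combine(summarize(xs[:mid]), summarize(xs[mid:])) == summarize(xs)
```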
Third Talk: Lipai Xu (Mentor: S. Mortazavi, Supervisor: E. de Lara)
Title: Tracking an Object with a Drone and Edge Computing
Abstract: The first part of my presentation will be a brief talk about our research and an introduction to edge computing, including the architecture and purpose of edge computing. Then I will give a more detailed introduction to our project, including our approaches to the problem of network connectivity, as well as the computer vision techniques we are currently using. Finally, the last part will be about our future plans and possible next steps.
First Talk: Yoona Park (Mentor: S. Jeblee, Supervisor: G. Hirst)
Title: Predicting causes of death with Deep Learning
Abstract: Verbal Autopsy is a method of classifying causes of death by gathering health information from non-formal written records of medical history. Ongoing research on Verbal Autopsy focuses on employing natural language processing combined with machine learning techniques to achieve higher accuracy in predicting causes of death. The most prominent model used for prediction so far is the Convolutional Neural Network (CNN), one of the most popular models for image processing. Our current research aims to extract the best features from the written records so that the model can understand the sequence of events leading up to death and make the best prediction possible.
Second Talk: Mohamed Moustafa (Mentor: B. Harrington, Supervisor: B. Schroeder)
Title: Notes on a Cap-sized Classroom – An Alternative to the Flipped Classroom
Abstract: Instructors are constantly looking for ways to help their students succeed. One of the major issues in Computer Science Education is retention rates in introductory Computer Science courses (CS0/1/2). In this presentation I will discuss an experiment that we have been running at the Scarborough Campus. This experiment uses traditional lecture and course components and supplements them with components and methodologies used in flipped classrooms. We aim to use the best of the two methodologies (traditional and flipped classrooms) to give students the best opportunity at succeeding in Computer Science.
Third Talk: Isaac Waller (Mentor: M. Brunet, Supervisor: A. Anderson)
Title: Generalists and specialists: quantifying activity diversity in online platforms
Abstract: In many domains of human endeavor, people must choose how broadly to allocate their energy. Is it better to concentrate on a narrow area of focus, and become a specialist, or to apply oneself more widely, and become a generalist? In the past century, there has been a trend of increasing specialization in professional domains, including business, medicine, and academia, while in the past decade, there has been a rise of generalists, especially in the tech industry. However, the terms "generalist" and "specialist" are used loosely, and rigorous definitions of them are lacking. In this analysis, we develop a principled measure of 'generalism' and 'specialism', and apply our measures to user activity on Reddit. We find that 'generalist' and 'specialist' communities differ in many significant aspects, and that specialist users are on average significantly more 'successful' (by post score) when they are commenting in specialist subreddits.
Fourth Talk: Samin Khan (Supervisor: I. Ahmed)
Title: Identifying psychotic symptoms through social media data and using an ethnographic approach for providing access to mental health resources through technology
Abstract: An overarching goal of human-computer interaction (HCI) is to better optimize the human experience through technology. A large part of the industry has been focused on improving users' experience with their devices rather than optimizing the experience of users' personal lives - specifically for the populations of users most in need of particular resources. We will first discuss the literature on developing technology for access, infrastructure, freedom, and visibility for marginalized populations. As motivation for the current research, we will look into the possibility of a rising mental health crisis in Western culture and the role of social media platforms. Databases and algorithms have been developed to identify psychotic symptoms in social media users based on their data. We will look into what the current barriers are and how we can overcome them, as well as explore the value of an ethnographic approach to designing an interface for a user to access necessary resources.
First Talk: Loora Zhuoran Li and Peiqi Wang (Mentor: M. Chiu, Supervisor: K. Jackson)
Title: Monte Carlo Simulation on Credit Risk
Abstract: Credit risk is among the most fundamental risks for financial firms to manage. It refers to the risk associated with possible losses due to defaults of creditors in a given portfolio. One key question associated with credit risk is: what is the probability that the loss will exceed a certain threshold? Monte Carlo simulation is often used to approximate this loss probability. However, risk managers are often most interested in the probability of a large loss. These large losses are rare events, so the associated probability is small and computationally expensive to estimate accurately by simple Monte Carlo simulation. Hence, risk managers often use importance sampling techniques to speed up the computation. In this presentation, we will start by introducing Monte Carlo simulation for the computation of the loss probability under the Gaussian Copula Factor Model. Then we will explain how to apply importance sampling in this context. Finally, we will discuss our experiments and findings so far, along with possible next steps.
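As a minimal sketch of the rare-event issue and the importance-sampling fix (simplified to a one-factor Gaussian copula with homogeneous obligors; the shift size is an illustrative choice, not an optimized one):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_obligors, p, rho, threshold = 100, 0.01, 0.2, 20   # toy portfolio
c = norm.ppf(p)                                      # default threshold

def loss(z, eps):
    """Number of defaults given common factor z and idiosyncratic shocks."""
    x = np.sqrt(rho) * z[:, None] + np.sqrt(1 - rho) * eps
    return (x < c).sum(axis=1)

n_paths = 100_000
eps = rng.standard_normal((n_paths, n_obligors))

# Plain Monte Carlo: almost no samples hit the rare large-loss region.
z = rng.standard_normal(n_paths)
plain = np.mean(loss(z, eps) > threshold)

# Importance sampling: shift the common factor toward the loss region
# (defaults happen when z is very negative) and reweight by the likelihood
# ratio of N(0,1) against N(mu,1), which is exp(-mu*z + mu^2/2).
mu = -2.0                                            # illustrative shift
z_is = rng.standard_normal(n_paths) + mu
weights = np.exp(-mu * z_is + 0.5 * mu**2)
is_est = np.mean((loss(z_is, eps) > threshold) * weights)

print(plain, is_est)   # the IS estimate has far lower variance
```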
Second Talk: Kayman Brusse (Supervisor: P. Marbach)
Title: A Model for Community Health in Online Forums
Abstract: Communities have always been a fundamental part of social networks and our society in general. Previous investigation into a formal model for information communities has suggested that certain parts of a community are fundamental to its success. Our goal is to develop a new model focused on communities within online forums. By studying the differences between healthy and unhealthy forums, we hope to gain further insight into why only some communities are successful. We construct the model by comparing the groups of users that participate in threads, creating a hierarchical structure that can be interpreted as a directed graph. We analyze this as a model for communities in the context of various forums. We then give several ideas on how to interpret community health inside the model and how it could be used to create metrics that quantify the health of a forum.
Third Talk: Jenny Xuchan Bao (Mentor: G. Zhang, Supervisor: R. Grosse)
Title: Introduction to Model-Based Reinforcement Learning and Potential Approaches for Improvement
Abstract: Reinforcement learning is a branch of machine learning in which agents explore an environment and seek to maximize some associated reward. Deep reinforcement learning has enjoyed great popularity in recent years. It has achieved impressive results on various tasks, including controlling humanoid robots and mastering the game of Go. Most of these were achieved through model-free reinforcement learning, where we do not seek to build a model of the environment. Model-free reinforcement learning algorithms are generally applicable and require little tuning, but suffer from high sample complexity. In contrast, model-based reinforcement learning algorithms aim to build a model of the environment in addition to learning a policy. Having a model of the environment can greatly reduce sample complexity, giving the approach huge potential in real-world applications where low sample complexity is especially crucial. However, model-based algorithms tend to suffer from model bias, and so far have only succeeded in tasks with restrictive forms of the learned models. This presentation will introduce the different approaches to reinforcement learning, the state-of-the-art literature on improving the performance of model-based reinforcement learning, as well as the research that my team is leading.
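To make the model-based idea concrete, here is a minimal toy sketch (a linear dynamics model fit from random-exploration transitions, used by a random-shooting planner; every detail is illustrative, not any specific algorithm from the literature):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy environment with unknown linear dynamics s' = A s + B a + noise;
# reward = -||s||^2 (drive the state to the origin).
A_true, B_true = np.array([[1.0, 0.1], [0.0, 1.0]]), np.array([[0.0], [0.1]])
def step(s, a):
    return A_true @ s + B_true @ a + 0.01 * rng.standard_normal(2)

# 1) Learn a dynamics model from random-exploration transitions.
S, SA = [], []
s = rng.standard_normal(2)
for _ in range(500):
    a = rng.uniform(-1, 1, 1)
    s_next = step(s, a)
    SA.append(np.concatenate([s, a])); S.append(s_next)
    s = s_next
W, *_ = np.linalg.lstsq(np.array(SA), np.array(S), rcond=None)  # [A|B]^T

# 2) Plan with the learned model: random shooting over action sequences.
def plan(s0, horizon=10, n_candidates=200):
    best_a, best_ret = None, -np.inf
    for _ in range(n_candidates):
        seq = rng.uniform(-1, 1, (horizon, 1))
        s, ret = s0, 0.0
        for a in seq:
            s = np.concatenate([s, a]) @ W   # model rollout: no env calls
            ret -= s @ s
        if ret > best_ret:
            best_ret, best_a = ret, seq[0]
    return best_a                            # execute first action (MPC-style)

print(plan(np.array([1.0, 0.0])))
```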
First Talk: Caroline Boyue Hu (Mentor: A. Grubb, Supervisor: M. Chechik)
Title: Preference in Decision Making with Goal Models
Abstract: In Goal-Oriented Requirements Engineering (GORE), stakeholders model the intentions, system requirements, and constraints of their projects with goal models. Analysis of goal models helps stakeholders understand and evaluate potential project scenarios and helps them make trade-off decisions at an early stage. I am going to go over an example of a goal model and how analysis is done on this model. I'll introduce the algorithm we use in the analysis phase, which casts the analysis as a Constraint Satisfaction Problem (CSP), and the state-space explosion problem that comes with CSP. We aim to improve the algorithm using information about the context in goal models. We want to identify properties in the model-building stage to determine user preferences and then use the preferences to reduce the state space. I will be going over an example to illustrate this.
Second Talk: Robert Li (Supervisor: Y. Xu)
Title: Modelling historical growth of adjective usages
Abstract: Languages change over time, and such change is well reflected in the extension of word usages. Take the word “cool” as an example: it used to simply mean “having a low temperature,” as in “cool liquid,” but it was later extended to mean “attractive or impressive,” as in “cool idea.” Much work has been done to explain such usage extension. George Lakoff proposed the chaining theory, which states that a noun classifier or a preposition can be used with a noun that is dissimilar from the central or conventional nouns in the category of the classifier or preposition, because the new noun is linked by a series of intermediate usages, each stepping further away from the central nouns. Modern efforts to computationally model this theory have yielded results suggesting that chaining is a cognitively optimal and efficient form for the emergence of word senses to take. This research aims to model the historical growth of adjective usages by leveraging the nearest-neighbour chaining algorithm. The model computes the probability for a given adjective to pair up with a noun and chooses the most likely adjective as the noun’s future category. When the model is fully constructed, its accuracy will be checked by having the model predict adjective-noun pairings in decade t using data from decade t-1.
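A minimal sketch of a nearest-neighbour chaining score (the real model operates over historical corpus data; the toy vectors below are placeholders):

```python
import numpy as np

# Placeholder noun embeddings; in the real model these would come from
# distributional vectors trained on a historical corpus.
nouns = {
    "liquid": np.array([0.9, 0.1]),
    "breeze": np.array([0.8, 0.3]),
    "idea":   np.array([0.1, 0.9]),
}

# Nouns each adjective has historically been paired with.
adjective_history = {
    "cool": ["liquid", "breeze"],
    "warm": ["liquid"],
}

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def chaining_score(adjective, noun):
    """Nearest-neighbour chaining: a new noun joins the adjective whose
    existing category contains the noun's closest member."""
    return max(cos(nouns[noun], nouns[n]) for n in adjective_history[adjective])

new_noun = "idea"
predicted = max(adjective_history, key=lambda a: chaining_score(a, new_noun))
print(predicted)  # the adjective most likely to extend to "idea"
```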
Third Talk: Alex Jaewon Shin (Mentor: D. Liaqat, Supervisor: E. de Lara)
Title: Tracking a Predefined Object with a Drone using Edge Computing
Abstract: Internet of Things (IoT) devices are predicted to generate over $300 billion annually by 2020. Drones, a prime example, have been growing in popularity in various sectors. However, there is a limitation to their processing power, which restricts their usability. We approach this problem through a mobile application we design, which instructs a drone to detect an object. However, the image processing computations on the drone may cause a significant level of latency. We believe a solution would be to offload the computations to a nearby server, so more data can be processed. This is an example of a simple edge computing model. However, the transfer of data from the drone to a server would increase the latency. As a result, we propose to develop different versions of the software to test latency and performance. In this presentation, I will first introduce the concept behind IoT and how it connects to edge computing. Then, after showcasing other research examples, I will talk about the current development of our design-based research and its findings.
Emmy Liu (Supervisor: Y. Xu)
Title: The Efficiency and Typological Features of Numeral Systems
Abstract: Numeral systems across different languages and cultures differ in how they represent the natural numbers. Some languages have an extremely restricted set of numerical terms, or have only approximate terms (represented as Gaussian functions). Other languages have recursive systems with various bases and irregularities. This talk will cover the results of a study replication showing that all numeral systems reflect a functional need for efficient communication using minimal cognitive resources, and so represent a trade-off between complexity and cost in a cognitive dimension. The second half of the talk will cover several questions that are not adequately explained by the current model, such as the non-viability of systems such as binary, and the implicational universals found in word order in numerals. In order to answer these new questions, the Uniform Information Density and Rapid Information Gain hypotheses will be explained and tested. Finally, other possible extensions to the current model will be discussed, along with the viability of extending these hypotheses to cover other systems of communication.
First Talk: Silvia Gonzalez Sellán (Supervisor: A. Jacobson)
Title: Computing Morphological Operations on Surfaces Using Geometric Flows
Abstract: Morphological operations arise in computer graphics, computer vision and even physical processes such as crystal evolution. The simplest operations are dilation --- where a shape grows outward --- and erosion --- where a shape shrinks inward. More complex operations can be designed by interweaving erosions and dilations. For example, dilating by a small amount and then eroding by the same amount will lead to a shrinkwrap effect called the "closing". Unfortunately, for a shape with a complex surface the typical volumetric representation -- a grid storing whether each point is inside or outside -- must be very high resolution to avoid staircase-like defects. We alleviate this by defining complex morphological operations directly on the discrete surface representation of a shape using Partial Differential Equations (PDEs). While our method is yet in its early stages, we show some very promising results as well as its perceived limitations.
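For contrast with the surface-based approach described above, here is what the standard volumetric version of the "closing" looks like on a voxel grid, using SciPy's ndimage module (a sketch of the grid-based baseline whose resolution limits the talk addresses, not of the talk's PDE method):

```python
import numpy as np
from scipy import ndimage

# A coarse voxel grid: True = inside the shape, False = outside.
shape = np.zeros((32, 32, 32), dtype=bool)
shape[8:24, 8:24, 8:24] = True
shape[15:17, 15:17, :] = False          # a thin tunnel through the solid

# Closing = dilation followed by erosion by the same structuring element;
# gaps and tunnels narrower than the element get sealed shut.
closed = ndimage.binary_closing(shape, structure=np.ones((3, 3, 3)))

# Equivalently, composed by hand:
by_hand = ndimage.binary_erosion(
    ndimage.binary_dilation(shape, np.ones((3, 3, 3))), np.ones((3, 3, 3)))

print(closed.sum(), by_hand.sum())  # the tunnel is (mostly) filled in
```

On a coarse grid like this, the result shows exactly the staircase-like defects the abstract mentions, which is what motivates defining the operations directly on the surface instead.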
Second Talk: Bob Cui (Mentor: B. Beekhuizen, Supervisor: S. Stevenson)
Title: Understanding English Homonyms with Matched "Pseudohomonyms"
Abstract: Homonyms are ambiguous words with multiple unrelated meanings (e.g. bat, tip, fan). The existence of homonymy makes language confusing and possibly inefficient, yet hundreds of homonyms have remained in the English lexicon throughout history. The properties of homonyms which allow them to survive in a language, despite their ambiguity, are of great interest to researchers in cognitive science, linguistics, and computational linguistics (CL). One computational approach to studying homonymy is to create “pseudohomonyms” by merging two words in a corpus, so that they become one token sharing the two unrelated meanings. Vector representations of pseudohomonyms can be learned from such an altered corpus so that the properties of ambiguous words can be studied in a controlled manner. Some existing work has explored the vector properties of ambiguous pseudowords by combining random words, but the importance of ensuring that these pseudowords resemble real homonyms has not been recognized. We attempt to bridge this gap by matching pseudohomonyms to real homonyms on a range of psycholinguistic properties, and compare the embeddings of these matched sets of words. This will allow us to uncover the important properties that define real homonyms – what exactly allows these ambiguous words to exist in the English language.
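The corpus-merging step is simple to sketch; here is a toy version using gensim's Word2Vec (the corpus and word pair are placeholders):

```python
from gensim.models import Word2Vec

# Toy corpus; in practice this would be a large text corpus, and the merged
# pair would be matched to real homonyms on psycholinguistic properties
# (frequency, concreteness, etc.).
sentences = [
    ["the", "dog", "chased", "the", "ball"],
    ["she", "read", "the", "book", "quietly"],
    ["the", "dog", "barked", "at", "the", "book"],
]

def merge_words(corpus, w1, w2):
    """Replace every occurrence of w1 and w2 with a single merged token,
    creating a 'pseudohomonym' that carries both unrelated meanings."""
    merged = f"{w1}_{w2}"
    return [[merged if tok in (w1, w2) else tok for tok in sent]
            for sent in corpus]

altered = merge_words(sentences, "dog", "book")
model = Word2Vec(altered, min_count=1)   # train embeddings on altered corpus
vec = model.wv["dog_book"]               # vector mixing both meanings
```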
First Talk: Cem Anil (Mentor: J. Lucas, Supervisor: R. Grosse)
Title: Estimating Wasserstein Distance from Samples – from Above and Below
Abstract: The concept of Generative Adversarial Networks (GAN) has been in the spotlight of the machine learning/deep learning community ever since it was proposed in 2014. GAN training involves two neural networks: a generator network that is trained to generate “fake” data that resembles data from the training dataset, and a critic network that is trained to tell whether a data point was sampled from the training dataset or was generated by the generator network. In this work, we aim to develop tools to evaluate how well the generator is able to mimic the training data by calculating the Wasserstein distance (also called Earth-Mover distance) between their respective probability distributions. We will discuss:
- What Wasserstein distance is and why it is a powerful performance metric to evaluate generator networks.
- How Kantorovich-Rubinstein duality can be used to turn the Wasserstein distance computation into a search over Lipschitz-1 functions – functions whose derivatives are within the range [-1, 1] (see the formula after this list).
- What strategies we can devise to get reliable upper and lower bounds on the Wasserstein distance between probability distributions using only samples from them.
- Whether, and under which conditions, neural networks can be used as universal Lipschitz function approximators.
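For reference, the duality in the second bullet takes the following standard form for the first Wasserstein distance:

```latex
\[
W_1(P, Q) \;=\; \sup_{\|f\|_{\mathrm{Lip}} \le 1}
\; \mathbb{E}_{x \sim P}\,[f(x)] \;-\; \mathbb{E}_{y \sim Q}\,[f(y)]
\]
% Any particular Lipschitz-1 critic f gives a lower bound on W_1(P, Q);
% the supremum is approached by searching over richer families of such f,
% which is where neural networks as Lipschitz function approximators
% enter the picture.
```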
Second Talk: Eris Jiayi Zhang (Mentor: N. Sultanum, Supervisor: F. Chevalier)
Title: Pimp up Your Vis: Extracting Visual Patterns from Images to Stylize Data Visualizations
Abstract: Recent years have seen increasing interest in the authoring and crafting of personal visualizations, both as a form of artistic expression and as a way to track and externalize personal experiences. Nevertheless, to produce such visually appealing results, existing visualization toolkits and charting libraries require not only a fair amount of creativity, but also proficiency in programming or a good command of the vast functionality of a full-featured user interface. In this project, our goal is to develop a system that helps users create whimsical and custom visual representations of information, where real images or paintings can instead act as inspiration or examples. The pipeline involves three main stages: 1) the semi-guided extraction of relevant features of an image, aided by computer vision techniques; 2) the editing and refining of these extracted features through freeform interactions to turn them into reusable glyphs; and 3) the binding of the extracted glyphs' graphical properties to data to create meaningful visualizations.
Jia'Ao Sun (Supervisor: Y. Xu)
Title: Spatial Concept Learning from Images and Text
Abstract: Spatial terms such as "in" and "on" are used in everyday language. However, learning these concepts is difficult for computer programs, because a single spatial term can be used in a variety of spatial scenarios. For example, the scene "book on the table" is quite different from the scene "picture on the wall" or "light on the ceiling", even though "on" correctly applies to all three scenarios. The problem we are exploring is how spatial concepts can be machine-learned. Spatial relationships between objects can be easily identified by humans based on our experiences, sense of depth, and perspective. Yet, given the same image, a machine finds identifying the spatial relationships a challenge. We attempt to solve this problem by taking a cognitively inspired approach. We want to design a probabilistic model that learns spatial-relationship schemas for different prepositions, using information from word embeddings for context and the relative locations of the objects' bounding boxes for a spatial sense, based on the presumption that context and images might inform spatial categories separately.
Fourth Talk: Sami Fassnacht (Mentor: S. Mouatadid, Supervisor: S. Easterbrook)
Title: The Arrhenius Project: Reconstructing the First Climate Model
Abstract: In the 1890s, Svante Arrhenius developed the first climate model to accurately predict changes in the earth’s temperature in response to different atmospheric carbon dioxide levels. Although other climate scientists have pointed out potential errors in Arrhenius’ work, few have actually taken a deeper look at these shortcomings, and no one has analyzed the severity of these errors or how they interact within the context of the model. This project is the first to produce a full replication of Arrhenius’ model to tackle this issue. To complete our analysis, our project will run Arrhenius’ model with modern sources of data. The results of this project will shed a clearer light on the accuracy of the first climate model, and the completed model will be released as a public educational tool for learning about climate modeling.
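Arrhenius' central result is often summarized by his "greenhouse law", which relates warming to the logarithm of the CO2 concentration; in modern notation (the constants Arrhenius derived differ from today's estimates):

```latex
\[
\Delta T \;=\; S \cdot \frac{\ln\!\left(C / C_0\right)}{\ln 2}
\]
% C_0 is a reference CO2 concentration, C the new concentration, and S the
% warming per doubling of CO2 (the climate sensitivity); Arrhenius' own
% estimate of S was considerably higher than most modern values.
```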
First Talk: John Xu (Supervisor: Y. Xu)
Title: Cognitive economy and the emergence of English word forms
Abstract: The English lexicon has grown substantially since the Anglo-Saxon period. In particular, many new words (e.g., croggy) have entered the lexicon over time. Despite extensive research on word forms and language change, it is not yet understood what general principles might underlie the emergence of word forms. We propose that historical word emergence reflects the principle of cognitive economy, trading off between ease of lexical production and ease of lexical distinction. We test this theory computationally by examining the lexical neighbourhood density of emerging English words and gap words over the past 200 years. Our preliminary results show support for our proposal and suggest non-arbitrariness in the emergence of English word forms, despite external influences that co-shaped the English lexicon.
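A minimal sketch of one common operationalization of lexical neighbourhood density (neighbours = words one edit away; the tiny lexicon is a placeholder for a real historical wordlist):

```python
import string

# Placeholder lexicon; the real study would use the English lexicon at
# different historical periods.
lexicon = {"cat", "bat", "cot", "cast", "at", "dog"}

def edit1(word):
    """All strings one insertion, deletion, or substitution away."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {l + r[1:] for l, r in splits if r}
    subs = {l + c + r[1:] for l, r in splits if r for c in letters}
    inserts = {l + c + r for l, r in splits for c in letters}
    return (deletes | subs | inserts) - {word}

def neighbourhood_density(word):
    """Number of real words one edit away from `word`."""
    return len(edit1(word) & lexicon)

print(neighbourhood_density("cat"))  # bat, cot, at, cast -> 4
```

On this view, an emerging word in a dense neighbourhood is easy to produce (it resembles existing forms) but hard to distinguish, which is exactly the trade-off the abstract describes.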
Second Talk: Yoori Choi (Mentor: S. Tsogkas, Supervisor: S. Dickinson)
Title: Parts-based characterization via Hamiltonian spectral geometry
Abstract: In computer vision, matching 3-dimensional non-rigid shapes is a typical yet interesting problem. In spectral shape analysis (i.e., the automatic analysis of geometric shapes), the eigenvalues and/or eigenfunctions of the Laplace-Beltrami operator (LBO) are most often used, due to their invariance under isometries. The LBO gives a global characterization of a shape, preserving information up to isometric transformation. However, it cannot extract geometric signatures that are localizable to arbitrarily shaped regions. This research aims to explore ways to describe or extract features from shapes in a parts-based manner via localized Hamiltonian spectra. The Hamiltonian operator is based on the Schrödinger equation in physics. We want to apply this to tasks such as partial shape matching, matching to RGB-D images, and extracting features for machine learning.
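In rough form (following work on Hamiltonian operators for shape analysis; sign conventions vary), the operator augments the LBO with a potential function that confines eigenfunctions to a region of interest:

```latex
\[
H\,\psi_i \;=\; (\Delta + V)\,\psi_i \;=\; \lambda_i\,\psi_i
\]
% Delta is the Laplace-Beltrami operator and V is a scalar potential defined
% on the surface; choosing V to be large outside a region of interest
% concentrates the low-order eigenfunctions psi_i inside that region,
% yielding spectra that are localizable to arbitrarily shaped parts.
```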
Third Talk: Farwa Khan (Mentor: S. Tsogkas, Supervisor: S. Dickinson)
Title: Investigating scene balance in photographs using computational and handcrafted methods
Abstract: Photographers and amateurs alike will benefit from means to increase the quality of photographs through automatic image cropping methods. Current methods of aesthetic evaluation use convolutional neural networks to score photographs. However, given that aesthetics is a subjective concept, special attention needs to be given to the low-level features that define the underlying structure of photographs. We propose an analysis of the features pertaining to scene balance, analyzing line drawings and edge maps of photographs to gain a more detailed understanding of what humans find aesthetically pleasing and displeasing. Using knowledge of key photography features, we hypothesize that the proportion of balanced elements contained in a photograph correlates with its aesthetic appeal. We train singular and pairwise networks on the edge maps and predict independent and relative aesthetic scores. We aim to show that the accuracy achieved on edge maps is equivalent to that of networks originally trained on colour photographs. This would confirm that balancing elements can determine the aesthetic appeal of a photograph without requiring the extraneous information that colour photographs contain. Verifying our hypothesis will allow us to further investigate the properties of symmetrical components and hand-crafted features that contribute to scene balance in photographs.
First Talk: Alex Chang and Abhishek Moturu (Mentor: J. Calver, Supervisor: K. Jackson)
Title: Creation of Synthetic X-Rays to Train a Neural Network to Detect Lung Cancer
Abstract: The purpose of this research is to create effective training data for a neural network for lung cancer detection. X-rays are a relatively cheap and quick procedure that provides a preliminary look into a patient's lungs, but real X-rays are often difficult to obtain due to privacy concerns; creating synthetic frontal chest X-rays using ray tracing on approximately 100 chest CT (computed tomography) scans can therefore provide a large, diverse training data set. This research project involves: lung segmentation, to separate the lungs within CT scans and randomize nodule placement; nodule generation, to grow nodules of random size and radiodensity (in Hounsfield units, or HU); ray tracing, to create X-rays from CT scans from several point sources using Beer's Law; image processing, to produce realistic X-rays with uniform orientation, dimensions, and contrast; and analysis of these methods and of the neural network's results, to improve accuracy when compared to real X-rays while reducing space and time complexity. This research may be helpful in detecting lung cancer at a very early stage.
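As a minimal sketch of the ray-tracing step under Beer's Law (simplified to parallel rays along one axis of the CT volume; the HU-to-attenuation conversion constant and scaling are illustrative):

```python
import numpy as np

def synthetic_xray(ct_hu, mu_water=0.2, voxel_mm=1.0):
    """Project a CT volume (in Hounsfield units) to a synthetic frontal
    X-ray using Beer's Law: I = I0 * exp(-sum(mu * dl) along each ray)."""
    # Convert HU to linear attenuation coefficients:
    # HU = 1000 * (mu - mu_water) / mu_water  =>  mu = mu_water * (1 + HU/1000)
    mu = mu_water * (1.0 + ct_hu / 1000.0)
    mu = np.clip(mu, 0.0, None)              # air and below attenuate nothing
    # Parallel projection: integrate attenuation along the ray axis.
    path_integral = mu.sum(axis=0) * (voxel_mm / 10.0)   # mu is per cm
    intensity = np.exp(-path_integral)       # fraction of photons transmitted
    # Invert so dense tissue appears bright, as in a radiograph.
    return 1.0 - intensity

ct = np.random.randint(-1000, 400, size=(64, 128, 128)).astype(float)  # fake CT
image = synthetic_xray(ct)                   # (128, 128) synthetic X-ray
```

The point-source version described in the abstract replaces the straight column sums with integrals along diverging rays, but the Beer's Law attenuation step is the same.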
Second Talk: Raymond Gao (Supervisor: D. Levin)
Title: Poking Things and Getting Data
Abstract: Using an in-house simulation for data generation and exploring various methods of fitting, we conduct a feasibility study on extracting the stiffness matrix from force and position data. If it is possible, then this research, and research in a similar spirit done elsewhere, will serve as a foundation for potential technologies in fields including robotics, VR, and material identification. If it is not possible, then the research will serve as a guideline for pitfalls to avoid. In this talk we will go over my experience as a first-time undergraduate researcher, the methods we tried (what worked, what did not, and why), and the mathematics that serves as a foundation for the study.
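To give a flavour of the fitting problem (a linear-elasticity sketch under small displacements; the actual study may use richer models): if forces and displacements are related by f = K x, the stiffness matrix K can be recovered from repeated pokes by least squares.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth stiffness matrix (symmetric positive definite) for a toy
# 3-DOF system; in the real study this is the unknown we want to extract.
A = rng.standard_normal((3, 3))
K_true = A @ A.T + 3 * np.eye(3)

# Simulated pokes: apply displacement x, record force f = K x + noise.
X = rng.standard_normal((50, 3))                 # 50 recorded displacements
F = X @ K_true.T + 0.01 * rng.standard_normal((50, 3))

# Least-squares fit: solve X K^T = F for K.
K_fit_T, *_ = np.linalg.lstsq(X, F, rcond=None)
K_fit = K_fit_T.T

print(np.max(np.abs(K_fit - K_true)))            # small residual error
```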
Third Talk: Alex Hurka (Mentor: S. Mouatadid, Supervisor: S. Easterbrook)
Title: The Arrhenius Project: A Reconstruction of the Arrhenius Model of Climate Change
Abstract: Swedish chemist Svante Arrhenius is widely regarded as the ‘father of climate change’ for his early work in computational climate modelling. Even though his 1896 model sparked an important new branch of climate science, its properties are not well known and the reasons behind its wildly varying accuracy have not been investigated. This is largely because no attempts have been made to fully reconstruct and analyze Arrhenius’ original model. In this project we explore how Arrhenius’ climate model performs on varied datasets, and identify how its simplistic representation of atmospheric dynamics affects the accuracy of its results. We present a programmatic reconstruction of the model, with extensions such as a multi-layered atmosphere representation and support for modern atmospheric data inputs. Analyzing this reconstructed model will shed light on sources of error in Arrhenius’ original model and improve the literature on its computational process. Additionally, we see potential in the reconstructed model as an educational tool for early climate science learners.
Fourth Talk: Bence Linder (Supervisor: S. Engels)
Title: Genetic Algorithms in Game Testing
Abstract: Balancing a game’s mechanics, maps, and AI can be a complicated process requiring many hours of user testing. Often, games have to be tuned constantly after their initial release, since the large player base can provide sufficient user data to make these changes. Ideally, it would be nice to have a simulated player population that responds to changes in the game in a way that is at least somewhat representative of a real player population, which would provide insight into the effects of certain changes before user testing needs to be done. We take the first steps in exploring this idea by using a genetic algorithm to tune the parameters of an AI behaviour tree in a classic game of deathmatch. By changing small values in the game mechanics and observing the resulting locally optimal parameter configurations, we have succeeded in building a simulated player population which responds to changes in game balance, and which has provided many valuable insights into the game, map, and AI implementation.
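A minimal sketch of the genetic-algorithm loop (the fitness function standing in for a simulated deathmatch is a placeholder; the real system evaluates behaviour-tree parameters inside the game):

```python
import numpy as np

rng = np.random.default_rng(0)

N_PARAMS = 6          # behaviour-tree parameters (aggression, range, ...)
POP, GENS = 40, 50

def fitness(params):
    """Placeholder for running a simulated deathmatch with these parameters
    and returning a score (e.g., win rate)."""
    target = np.array([0.7, 0.2, 0.9, 0.4, 0.5, 0.1])  # pretend optimum
    return -np.sum((params - target) ** 2)

pop = rng.uniform(0, 1, (POP, N_PARAMS))
for _ in range(GENS):
    scores = np.array([fitness(p) for p in pop])
    # Selection: keep the top half as parents.
    parents = pop[np.argsort(scores)[-POP // 2:]]
    # Crossover: each child mixes the genes of two random parents.
    mates = parents[rng.integers(0, len(parents), (POP, 2))]
    mask = rng.random((POP, N_PARAMS)) < 0.5
    children = np.where(mask, mates[:, 0], mates[:, 1])
    # Mutation: small Gaussian perturbations keep the population exploring.
    pop = np.clip(children + 0.05 * rng.standard_normal((POP, N_PARAMS)), 0, 1)

best = pop[np.argmax([fitness(p) for p in pop])]
print(best)   # parameter configuration near the local optimum
```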