Peer Review

Time to take a look at our peer groups’ blogs again.

Automation Testing Environment: They have completed their project, congratulations! According to their posts, pitching felt easier in the second showroom event, and everything else also went slightly better than in the ICT Showroom. Having participated in both events ourselves, we think it might have been better to hold the Capstone Showroom first, since the ICT Showroom had a lot of representatives from different companies whereas the Capstone Showroom did not.

If Insurance: It’s been more than a month already since they last updated their blog, but at least the ICT Showroom seems to have gone well. Their project owners were satisfied with their work, so that’s definitely good news!


Analytics Overview

We haven’t talked much about our Analytics API/Machine Learning Model, but since the model was successfully deployed as a web service last week, now is a perfect time!

During the project, alongside everything else, our team has been collecting as much actual room occupancy data from various classrooms in ICT City as we possibly could. This has largely been manual work: someone from the team brings in a sensor and stays in the room to count occupants, usually the students attending the class, and afterwards all the data is put together in Excel for later use. Because of our limited time and resources we didn’t get quite as much data as our ML model might have needed, but enough that we got some nice analysis results anyway!
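For the curious, the bookkeeping step can be sketched roughly like this. The field names, the example rows and the five-minute matching tolerance are just our illustration, not the exact format of our Excel sheets:

```python
from datetime import datetime, timedelta

# Hypothetical records: sensor readings and manual head counts, each timestamped.
sensor_rows = [
    {"time": datetime(2019, 3, 1, 10, 0), "co2": 420, "temp": 21.0},
    {"time": datetime(2019, 3, 1, 10, 15), "co2": 640, "temp": 21.8},
]
count_rows = [
    {"time": datetime(2019, 3, 1, 10, 2), "occupancy": 0},
    {"time": datetime(2019, 3, 1, 10, 14), "occupancy": 18},
]

def label_with_counts(sensor_rows, count_rows, tolerance=timedelta(minutes=5)):
    """Attach the nearest manual head count to each sensor reading,
    dropping readings with no count close enough in time."""
    labelled = []
    for reading in sensor_rows:
        nearest = min(count_rows, key=lambda c: abs(c["time"] - reading["time"]))
        if abs(nearest["time"] - reading["time"]) <= tolerance:
            labelled.append({**reading, "occupancy": nearest["occupancy"]})
    return labelled
```

The labelled rows are what eventually becomes the training data for the model.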

We decided to use Azure Machine Learning Studio as our Analytics API, since creating and testing different process flows and models in it is very easy, and the tool is actually free to use as well, so the testing didn’t cost us anything!

Training version of the Machine Learning model created in Azure ML Studio



We ended up using a Neural Network Regression model, which predicts the room occupancy from CO2, temperature, humidity, light, motion and classroom category. The classroom category variable is assigned by the team to accommodate the fact that different classrooms can differ quite a lot from each other (think about it like this: 20 people in an IT classroom vs. 20 people in the Alpha auditorium? Definitely not the same thing!).
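Azure ML Studio handles the feature columns for us, but as an illustration, the model input could be encoded along these lines. The category labels here are made up for the example, not our real classroom categories:

```python
# Assumed classroom categories, one-hot encoded alongside the sensor values.
CATEGORIES = ["it_classroom", "lecture_room", "auditorium"]

def encode_features(co2, temperature, humidity, light, motion, category):
    """Build the numeric feature vector for the regression model: five
    sensor values plus a one-hot classroom category, so that e.g. an
    auditorium and an IT classroom with identical sensor readings are
    still distinguishable to the model."""
    one_hot = [1.0 if category == c else 0.0 for c in CATEGORIES]
    return [float(co2), float(temperature), float(humidity),
            float(light), float(motion)] + one_hot
```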

So, how were the results then?

Scoring table we got from testing our newly trained model.

Actually not that bad, considering the small amount of data (we only had around 700-900 rows, I think?). In the table above, ‘Occupancy’ is the actual value in the testing data set and ‘Scored Labels’ are our model’s calculated predictions. There are a few missteps, but the overall accuracy is exactly what we hoped for: while we can’t tell the number of occupants as an exact figure, we can tell whether the space is empty, slightly used or more or less full. With more data, we’d expect even more accurate results!
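That “empty / slightly used / more or less full” reading of the predictions can be sketched as a simple banding step. The thresholds below are illustrative, not the exact cut-offs we use:

```python
def occupancy_band(predicted, capacity):
    """Map a raw predicted head count to the coarse bands we actually
    report: 'empty', 'slightly used' or 'full'. Thresholds are assumed."""
    ratio = max(predicted, 0) / capacity
    if ratio < 0.1:
        return "empty"
    if ratio < 0.6:
        return "slightly used"
    return "full"
```

This is also why the occasional off-by-a-few prediction in the scoring table doesn’t hurt much: most small errors land in the same band as the true value.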

Also, since the amount of data we have is still quite small, there is some bias in it, and there are more than a few cases the model simply can’t handle without us gathering more data and re-training it. However, given the number of hours we were able to spend on data gathering, we are very pleased with the results, and with the fact that we can use the model in our demo at the Capstone Showroom!

ICT Showroom reflections

Better late than never, right? We got a little carried away with work and forgot to post this update.

  • How did it go?
    Excellently! Our stakeholders were very interested in the project, and we made some great connections, including Vesa Taatila. Several company representatives were also very interested in the project. Even though we didn’t win the competition, we are very happy about how the event went and think that the interest and connections we gained are more important than any competition win.
  • How did we manage? What was good and what was not so good?
    Overall things went smoothly. We had organized things well and the morning didn’t fall into complete chaos. However, we had some technical difficulties with the system that had to be patched up early in the morning. Next time we should try to get the new version up and running for testing a little earlier to prevent this.
  • What did you learn from organizing & participating in the Showroom?
    Not much on our part. We have to mention, though, that the event could have been organized better on TUAS’s side, as many of the groups (us included) had a hard time figuring out where their stand was supposed to be.
  • What did you learn from the other projects?
    We honestly had very little time to get to know the other projects. However, the OpenCV motion detection in the machine learning for traffic flow based marketing project was interesting. It also seemed like a lot of the projects weren’t quite as ready as we thought they’d be.
  • What would you do differently next event?
    Have something to catch the eye of the average event-goer. While our target group was really interested (which is great!), we didn’t get a lot of spontaneous interest from students and other passers-by. Luckily we had the huge TV screen with our GUI open for testing, which seemed to get some attention.
  • Other notes
    The leftover loot was shared among the party members. Also, some pictures below!

Peer Review

Okay time to take a pre-ICT Showroom look at the blogs of our peer groups!

Automation Testing Environment: Content keeps coming in steadily, and both it and the posts show that the group is still enthusiastically working on the project! Posting the poster alongside the other content is a nice touch, and we are looking forward to seeing this project in the Showroom.

If Insurance: Their blog hasn’t been updated since February. Hopefully the project is still doing well and will be ready for the Showroom despite the inactive blog.

As a sidenote: Sorry for our pitching video not working! It has been hopefully fixed now. 🙂

Pitching Video

Never fear, our pitching video is here! Since the team happened to have some skills in video and audio editing, we put some extra work into editing this video. The goal was to highlight the benefits the project outcome provides to the organizations and companies utilizing the finished system.


Oh yeah, and for the curious, here are some making-of pictures from our setup! The video was filmed in the Information Security Lab, by the way.


Our first sensor test results are out!


We have taken our LoRa sensors to different classrooms to get some sample data. We are happy to see some change in sensor values when people enter the classroom; the CO2 and motion sensor values in particular seem useful for our use case. Here are some visualizations from different test cases. We still need a lot more data for accurate predictions, but this is a promising start!
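As a rough illustration of why the CO2 values look useful, here is a sketch that flags when CO2 has climbed noticeably, a crude hint that people have entered the room. The window size and 50 ppm threshold are made-up numbers for the example, not our actual analysis:

```python
def rising_co2(readings, window=3, min_rise=50):
    """For each reading past the warm-up window, flag whether CO2 has
    climbed by at least `min_rise` ppm over the previous `window`
    readings - a simple occupancy hint."""
    flags = []
    for i in range(window, len(readings)):
        flags.append(readings[i] - readings[i - window] >= min_rise)
    return flags
```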

Towards service implementation

This sprint consisted of several tasks in preparation for the service implementation. We are currently researching, designing and building the service architecture. At the same time, we are putting together ideas and requirements for the technology comparison between the LPWAN technologies our sensors use.

I think it’s odd to speak of week 3’s main outcome, considering that the current sprint consists of weeks 3 and 4; there should be absolutely no need for mid-sprint outcomes. That being said, the meeting with Radiolab and acquiring the LoRa sensors were very important last week, as were the advances made in the GUI design.

For this week we had team members tell a little about which parts of the project they have been working on. Our tasks were mostly assigned to pairs, but this should give some kind of picture anyway:



As we want to visualize occupancy data with an interactive map of ICT City, we needed the floor maps as SVG files. We had previously made a preliminary version, mostly used for testing the UI technologies and design, but for the first sprint our aim was to get more or less final versions of at least the first two floors. Luck being on our side, we learned that similar maps had previously been made for all the floors of ICT City. We acquired these maps and only needed a couple of hours of work to make the adjustments required to get them to look and work the way we needed. The next step for the map is to put the final version into the GUI and make whatever little changes we’ll still need.
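As a small sketch of what the occupancy map does with those SVGs: each room element can be recoloured by its occupancy level. The room id scheme, the tiny stand-in SVG and the colours below are assumptions for the example, not the real map files:

```python
import xml.etree.ElementTree as ET

# Minimal stand-in for one floor map: each room is an element whose id
# we assume matches a room code.
FLOOR_SVG = """<svg xmlns="http://www.w3.org/2000/svg">
  <rect id="B1033" width="10" height="10" fill="#cccccc"/>
  <rect id="B1034" width="10" height="10" fill="#cccccc"/>
</svg>"""

BAND_COLOURS = {"empty": "#4caf50", "slightly used": "#ffc107", "full": "#f44336"}

def colour_rooms(svg_text, bands):
    """Recolour each room element according to its occupancy band,
    leaving rooms without data untouched."""
    ET.register_namespace("", "http://www.w3.org/2000/svg")
    root = ET.fromstring(svg_text)
    for elem in root.iter():
        room_id = elem.get("id")
        if room_id in bands:
            elem.set("fill", BAND_COLOURS[bands[room_id]])
    return ET.tostring(root, encoding="unicode")
```

In the actual GUI this kind of per-room styling happens on the frontend side, of course; the sketch just shows the idea.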


Since we want to make sure everyone in the team gets to learn how to work with the cloud platform, as well as with whatever instances we decide to use, we took a closer look into both how we should organize the Azure instances and how we should handle user rights in Azure, since some of the team already have access to the Azure subscription account.

It was easy to decide that we should put all the needed instances into the same resource group and handle user rights at the resource group level, since that makes things fairly convenient. Additionally, we thought it best to first give ‘read’ access to everyone in the group and then grant further rights to specific instances as needed, since the built-in Azure user roles do not offer exactly what we’d really need.


GUI Design

The graphical user interface for the end user has been designed and a preliminary implementation exists which is constantly being worked on. The GUI will be implemented with VueJS and Vuetify Material Design Component Framework.

Sensor Testing

Test cases have been designed and will be carried out with the LoRa sensors next week. The goal is to test the accuracy of the sensors in the classrooms. Records are kept of the circumstances in the classrooms, such as the number of people in the room and the sensor placement.


Azure infrastructure design

We researched Azure infrastructure possibilities for our service. We’d very much like to use the different kinds of services offered by the platform, and there are a lot of choices: everything from Event Hubs, IoT Hub and Azure Functions to machine learning related services and virtual machine instances. We decided not to rush this task, as it’s particularly important to take some time with it because our possible data storage choices depend on the architecture chosen.

Azure data receiving end-point

We deployed an Ubuntu 16.04 based data-receiving server instance in Azure. At this point it is mostly for testing purposes, but depending on the infrastructure design this server will have some role in the final implementation.
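As a rough sketch of what such a receiving endpoint might look like: accept a POST, validate the JSON body, and acknowledge. The payload field names here are our assumption for the example, not the sensors’ actual message schema:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_reading(raw_bytes):
    """Parse and validate one incoming sensor message (assumed fields)."""
    msg = json.loads(raw_bytes)
    required = {"device_id", "co2", "temperature", "humidity", "motion"}
    missing = required - msg.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return msg

class ReadingHandler(BaseHTTPRequestHandler):
    """Bare-bones data-receiving endpoint for testing purposes."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        try:
            parse_reading(self.rfile.read(length))
            self.send_response(204)  # accepted, nothing to return
        except ValueError:
            self.send_response(400)  # malformed or incomplete payload
        self.end_headers()

# To actually serve (blocks forever):
# HTTPServer(("0.0.0.0", 8080), ReadingHandler).serve_forever()
```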

Service interface design

Without a clear picture of the Azure infrastructure, we were only able to design the interface between the client side and Azure. This is a simple JSON format in which the data is transferred to the frontend.
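As an illustration, producing such a payload could look like this. The exact field names are placeholders for the example, not our final interface:

```python
import json

def occupancy_payload(rooms, updated):
    """Serialize the frontend-facing message: one entry per room with
    its latest occupancy band, plus an update timestamp."""
    return json.dumps({
        "updated": updated,
        "rooms": [
            {"id": room_id, "band": band}
            for room_id, band in sorted(rooms.items())
        ],
    })
```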



Technology Comparison

I began by looking for technology comparisons made by others that were similar to what we are trying to do. Based on those and our earlier discussions, I made an early version, showed it to the group and received feedback on it. I then tried to clarify some points, removed unnecessary ones and condensed others. Finally, Riku and I made the final changes, trying to make the comparison clear and understandable. For now we have a way to compare the technologies, which we will adjust if necessary.

Kick-off for 2019

We began the year by putting together a project backlog on Tuesday and continued with Scrum sprint planning, splitting the backlog tasks into sprint-sized pieces. We’ll begin with basic versions of the Azure infrastructure and server backend, as well as the graphical user interface design. Research on IoT sensors will also continue.

Speaking of IoT sensors, we got our first set of LoRa sensors from the Radio laboratory after our meeting with Jani Auranen on Tuesday. The meeting provided us with a good set of information about the actual sensor devices and the possibilities of acquiring more of them.