Challenges

At the current stage we have opened a call for industrial challenges from companies. Here are several examples of challenges:

You are welcome to submit a challenge to a.naumchev@innopolis.ru

SOFTEAM. Application of Product Line Engineering to the Modelio Tool Suite

Context: Modelio is a tool capable of modeling in different domains, from embedded systems to enterprise architecture. The Modelio team maintains the core tool for various platforms (Windows, Mac, Linux, x32, x64) as well as 50+ domain-specific modules. Customers often require a dedicated solution with a given set of features, constraints, and requirements. Building a new package may require a lot of human effort, which affects the time-to-market for a new solution.

Goal: Product Line Engineering (PLE) is a promising method for dealing with variability. SOFTEAM would like a team to experiment with PLE tools such as FeatureIDE, BUT4Reuse, and pure::variants in the Modelio context (see the example here). Ideally, the team would propose a solution to extract a feature model from Modelio artifacts, e.g., by creating a specific adapter for BUT4Reuse that parses Modelio module configuration files and extracts dependencies. The team would demonstrate PLE in practice.
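As a rough illustration of the adapter idea, the following Python sketch extracts dependencies from a module descriptor. The XML element and attribute names here are invented for the sketch and do not reflect the real Modelio module schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical Modelio module descriptor; real module configuration
# files use a different, richer schema.
MODULE_XML = """
<module name="JavaDesigner">
  <dependencies>
    <required module="ModelerModule"/>
    <optional module="DocumentPublisher"/>
  </dependencies>
</module>
"""

def extract_features(xml_text):
    """Map a module to its mandatory and optional dependencies --
    a first step toward a feature model for BUT4Reuse/FeatureIDE."""
    root = ET.fromstring(xml_text)
    deps = root.find("dependencies")
    return {
        "feature": root.get("name"),
        "requires": [d.get("module") for d in deps.findall("required")],
        "optional": [d.get("module") for d in deps.findall("optional")],
    }

print(extract_features(MODULE_XML))
```

Running the extractor over all module descriptors would yield the raw feature/dependency pairs that a feature-model tool could then consume.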

Difficulty level: Beginner to intermediate

Requirements for the participant:

  • Knowledge about Java and XML

Related tutorial: Re-engineering Software Variability into Software Product Lines

Rostelecom IT 1. Enterprise Architecture Analysis and Validation

Context: Modeling an enterprise architecture, and IT systems in particular, provides the development team with significant benefits, as the model takes on the role of a framework for system design and requirements traceability. However, this approach raises new challenges as well, the most important of which are the analysis and validation of the enterprise model. For the model of a large enterprise system, the number of elements and connections grows rapidly in the course of its development; thus it becomes extremely hard to analyze and maintain the model manually.

Goal: To find a promising solution for enterprise model analysis and validation, i.e., detecting missing derived relations, wrong semantic structures, pattern violations, etc.

Difficulty level: Intermediate - High

Description: Within the challenge, the participants should get familiar with the ArchiMate enterprise architecture modeling language and survey tools and methods for analyzing ArchiMate models or translating them into other formats suitable for analysis (for instance, a general ontology model). As validation, they should build a prototype that takes an ArchiMate model as input, runs several validation checks, and provides a query mechanism for further analysis.
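As a minimal sketch of one such validation check, the fragment below flags application services that no component realizes. The in-memory model is a deliberate simplification; a real prototype would load the ArchiMate Model Exchange Format (an XML file) instead:

```python
# Simplified in-memory representation of an ArchiMate model:
# elements keyed by id, plus a list of relations between them.
elements = {
    "app1": {"type": "ApplicationComponent", "name": "Billing"},
    "svc1": {"type": "ApplicationService", "name": "Invoicing"},
}
relations = [
    {"type": "Realization", "source": "app1", "target": "svc1"},
]

def unrealized_services(elements, relations):
    """Flag application services that no component realizes --
    one example of a 'missing relation' validation check."""
    realized = {r["target"] for r in relations if r["type"] == "Realization"}
    return [eid for eid, e in elements.items()
            if e["type"] == "ApplicationService" and eid not in realized]

print(unrealized_services(elements, relations))  # → [] (no violations here)
```

A full prototype would run a catalog of such checks and expose the model through a query interface (e.g., SPARQL after translation to an ontology).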

Requirements for the participant:

  • Knowledge of system/enterprise modeling and the ArchiMate specification in particular
  • Knowledge of ontology languages and frameworks
  • Programming skills

Rostelecom IT 2. Requirements Quality Metrics

Context: There is a variety of requirements management techniques, tools, and practices; however, they should be tailored to the development methods the team chooses. Thus the questions arise: how to measure the quality of the requirements that the team gets? How to distinguish flaws in the work of requirements engineers from a wrong instrumental or notational choice?

Goal: Review the methods and metrics introduced in requirements engineering (by both practitioners and researchers) and propose a requirements quality model based on metrics that can be gathered during the development process and from structured team feedback. The model should be suitable for requirements quality assessment.
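A first cut of such a parametric model could be a weighted combination of normalized metrics. The metric names and weights below are illustrative assumptions, not a validated model:

```python
# Sketch of a parametric requirements quality model: each metric is
# normalized to [0, 1] and combined with weights that sum to 1.
def quality_score(metrics, weights):
    """Weighted sum of normalized metric values."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[name] * metrics[name] for name in weights)

metrics = {
    "unambiguity": 0.8,   # share of requirements passing an ambiguity check
    "completeness": 0.6,  # share of requirements with acceptance criteria
    "volatility": 0.9,    # 1 - (changed requirements / total) per iteration
}
weights = {"unambiguity": 0.4, "completeness": 0.4, "volatility": 0.2}
print(round(quality_score(metrics, weights), 2))  # → 0.74
```

The research part of the challenge is precisely to justify which metrics and weights belong in such a model, e.g., by calibrating them against structured team feedback.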

Difficulty level: Intermediate - High

Requirements for the participant:

  • Knowledge of software development process and its metrics
  • Knowledge of requirements engineering
  • Excellent research skills
  • Experience in building parametric models

MIPS BU (Wave Computing). Tools for optimal placement and routing in Hardware Design

Context: The optimal placement and routing (connecting placed cells by wires) of hardware components is the cornerstone of the design process that determines the efficiency of the developed components. The tools by Synopsys and Cadence hold a monopoly position among the development teams at Intel, Apple, and NVIDIA. Tool license costs can easily reach hundreds of thousands of dollars per node. New tools in this domain can disrupt the monopoly and create great market potential for a start-up.

Goal: The team will study a subject on the placement and routing problems and will apply optimization algorithms. The ideal solution will be a prototype that:

  1. Parses Verilog netlist files (examples will be provided) into a graph of standard cells and connections.
  2. Applies an optimal placement and routing algorithm. The optimal placement should use the smallest number of connectivity layers as well as the minimal overall length of the connections, with some restrictions on the longest path.
  3. Displays the intermediate graph and the final result.
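A toy version of the parsing step might look like the sketch below. The netlist and the regex-based parser are simplifications for illustration; a real solution would use a proper Verilog grammar:

```python
import re

# Toy gate-level netlist; real Verilog netlists are far richer.
NETLIST = """
module top(a, b, y);
  AND2 u1 (.A(a), .B(b), .Y(n1));
  INV  u2 (.A(n1), .Y(y));
endmodule
"""

def parse_cells(text):
    """Extract standard-cell instances and the nets on their pins."""
    inst = re.compile(r"^\s*(\w+)\s+(\w+)\s*\((.*)\);", re.M)
    pin = re.compile(r"\.(\w+)\((\w+)\)")
    return {name: {p: net for p, net in pin.findall(pins)}
            for ctype, name, pins in inst.findall(text)
            if ctype != "module"}

def connections(cells):
    """Edges between cells that share a net (an undirected graph)."""
    edges = set()
    for a in cells:
        for b in cells:
            if a < b and set(cells[a].values()) & set(cells[b].values()):
                edges.add((a, b))
    return edges

cells = parse_cells(NETLIST)
print(connections(cells))  # → {('u1', 'u2')}, connected via net n1
```

This graph of cells and shared nets is the input on which placement and routing optimization (step 2) would then operate.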

Difficulty level: Intermediate - High

Requirements for the participant:

  • Knowledge of programming languages (Java, Python, ...)
  • Ability to write text parsers
  • Knowledge of graphs and algorithms
  • Knowledge of algorithmic complexity, experience in optimizing for speed and memory usage.

More information on the challenge will be provided during the elicitation period.

Relevant information: lecture by Yuri Panchul

Edge Vision. Continuous Integration of hardware dependent product

Context: The Edge Vision company develops custom computer vision applications for autonomous single-board computers such as the Raspberry Pi. A typical application is, for example, highway traffic analysis with cameras. The development process at Edge Vision requires thorough testing of different scenarios, executed in several subsequent stages in different environments: cloud and hardware. The problem is that the testing phase takes more than 10 hours, which leads to delays in the integration of new features. New features are developed in parallel with the testing of the previous release, and their integration brings dependencies and problems.

Goal: The team will

  1. Study the current flow of Continuous Integration.
  2. Analyse the flow with the goal of alleviating the problem by reducing the dependencies or the overall testing time.
  3. Present recommendations for optimization.
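To make the dependency argument concrete, here is a small sketch that computes the wall-clock time of a pipeline as its longest dependency chain. The stage names, durations, and dependencies are invented for illustration, not the real Edge Vision pipeline:

```python
# Hypothetical stage durations (hours) and dependencies between stages.
duration = {"build": 1, "cloud_tests": 4, "hw_tests": 6, "report": 1}
deps = {"build": [], "cloud_tests": ["build"],
        "hw_tests": ["cloud_tests"], "report": ["hw_tests"]}

def finish(stage, deps, duration):
    """Earliest finish time: longest dependency chain plus own duration."""
    start = max((finish(d, deps, duration) for d in deps[stage]), default=0)
    return start + duration[stage]

total = max(finish(s, deps, duration) for s in deps)
print(total)  # → 12: all stages run one after another

# If hardware tests no longer wait for cloud tests, the two run in parallel:
deps["hw_tests"] = ["build"]
print(max(finish(s, deps, duration) for s in deps))  # → 8
```

Removing one artificial dependency cuts the critical path from 12 to 8 hours in this toy example, which is the kind of recommendation step 3 asks for.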

Difficulty level: Intermediate - High

Requirements for the participant:

  • Familiarity with the tools currently in use: Docker, Python, Linux, GitLab CI.
  • Good knowledge of the internal structure of Git (fast-forwarding, rebase, ...). See the Git Book.

More information on the challenge will be provided during the elicitation period.

Acronis 1. Reverse Engineering and PE analysis tools comparison

Context:

Disclaimer: reverse engineering is legal in Russia with certain limitations. Check Article 1280 of the Civil Code of the Russian Federation for more information.

Acronis, as a cyber-protection company, is interested in malware analysis and a deep understanding of how everything works. Many tools and frameworks allow disassembly and analysis of different executables (.exe, .sys, .efi). Historically, developers at Acronis have used IDA Pro as the main tool for analysis. Recently, the National Security Agency made its tool "Ghidra" open source, which means it may become more developer-friendly than IDA. We are interested in the search for a more effective approach to reverse engineering.

Goal: The company would like to conduct a comparative analysis of two reverse engineering tools.

Example of comparison attributes:

  • Open source or proprietary and, as a consequence, the impact of support from developers and the community.
  • Entry threshold (e.g., usability)
  • Scripting capabilities...

Based on the knowledge obtained through the tutorial and reading, the team will experiment with the tools and deliver a presentation on the usability analysis of the tools. Mentors will advise on the current legal limitations.

We expect a team of students brave enough to try working in different disassembly tools. As training material, Acronis will provide a short lecture. At the Hackathon, there will be a task to analyze an executable.

Tools:

  • IDA Pro 6.8 or later
  • Ghidra (https://github.com/NationalSecurityAgency/ghidra)
  • HIEW

Difficulty level: Intermediate - High

Requirements for the participant:

  • Knowledge about mainly C. Python is nice to have.
  • Basic knowledge of assembly language would be nice
  • Basic understanding of Portable Executable format would be nice

Interesting reading that may help if you are a complete newbie:

A dedicated lecture will be provided on October 18 for all interested participants.

Acronis 2. Cyber platform API usability

Context:

Founded in Singapore in 2003 and incorporated in Switzerland in 2008, Acronis now has more than 1,300 employees in 18 countries. Its solutions are trusted by more than 5 million consumers and 500,000 businesses, including 80% of the Fortune 1000 companies. Acronis’ products are available through 50,000 partners and service providers in over 150 countries in more than 30 languages.

Acronis sets the standard for cyber protection through its innovative backup, anti-ransomware, disaster recovery, storage, and enterprise file sync and share solutions. Enhanced by its award-winning AI-based active protection technology, blockchain-based data authentication and unique hybrid-cloud architecture, Acronis protects all data in any environment – including physical, virtual, cloud, mobile workloads and applications – all at a low and predictable cost.

Recently, Acronis released the Acronis Cyber Platform, which allows customers to integrate and extend their applications and services with cyber protection solutions.

Goal:

The team will play the role of a customer who develops a specific application that requires a backup and cybersecurity solution. Your goal is to analyze the usability of the Acronis Cyber Platform API through prototyping activities. The ideal result would be a report on the suitability of the API for small development teams.

At the Hackathon, together with a team of architects, you will experiment with the API and create a presentation on the usability analysis.

Difficulty level: Beginner to intermediate

Requirements for the participant:

  • Knowledge of any programming language
  • Dev-ops experience will be nice
  • Experience with cloud applications will be nice

Interesting reading that may help if you are a complete newbie:

  • A link to the platform documentation will be provided soon

MELODIC 1. Model and deploy a cross-cloud or multi-cloud data-intensive or typical business application using the MELODIC platform

Goal: To model a data-intensive or typical business application (to be chosen by the participant) in CAMEL and deploy it using the MELODIC platform in a cross-cloud or multi-cloud way; to present MELODIC's capabilities for dynamically sizing the Spark cluster based on the current resource usage and application requirements.

Difficulty level: Beginner to intermediate

Description: Within the challenge, the participants should learn the CAMEL language and use the MELODIC platform. As validation, they should model their own (or a provided) application (a big data or typical business application) and deploy it using the MELODIC platform. The application should be able to run in a cross-cloud or multi-cloud way.

Requirements for the participant:

  • Knowledge about cloud computing.
  • Knowledge about modeling languages.
  • An own or provided data-intensive or business application (e.g., Spark examples).

Related tutorial: Good Bye Vendor Lock-in: Getting your Cloud Applications Multi-Cloud Ready!

MELODIC 2. Design and extend the architecture of the MELODIC platform with a new solver

Goal: To extend the MELODIC platform architecture with a new solver.

Difficulty level: High

Description: The challenge is to design how MELODIC can be extended with new solvers. The challenge should include the selection and recommendation of candidate solvers, with an explanation of why they are good alternatives to the already implemented ones. We assume the use of an already existing solver based on available libraries. The key element of the challenge will be to understand the MELODIC platform architecture and API and to properly customise the chosen library for use with the MELODIC platform. We assume some shortcuts in the implementation, such as using a hardcoded utility function instead of integrating with the Utility Generator. If the challenge is successful, the next step would be to remove these shortcuts.

Requirements for the participant:

  • Knowledge of REST, Docker containers, and Java.
  • Knowledge of optimization solvers.
  • Very good programming skills.
  • Experience in design and development of complex systems integrated with ESB.

Related tutorial: Good Bye Vendor Lock-in: Getting your Cloud Applications Multi-Cloud Ready!

MELODIC 3. Enhancing the MELODIC platform with prediction capabilities

Goal: The MELODIC platform uses the current values of metrics from the application (CPU, memory, business-defined metrics, etc.) to adapt the deployment of the application. The goal of the challenge is to add prediction capabilities using a chosen method of time series forecasting (exponential smoothing, ARMA/ARIMA, logistic regression, linear regression, tree-based methods such as Random Forest or XGBoost, or neural networks) to allow adaptation based on the predicted value of the time series instead of the current one.

Difficulty level: Intermediate - High

Description: Within the challenge, the participants need to integrate with the MELODIC messaging system to gather metric values and make forecasts based on them. The forecast value should be sent as a new type of metric to a given queue, and the adaptation should use the predicted value. This requires understanding the MELODIC messaging subsystem, implementing one forecasting method (it can be simple), and sending the metric to a JMS queue.
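As a sketch of the simplest forecasting option mentioned above, simple exponential smoothing over a metric series fits in a few lines. In MELODIC the input values would come from the messaging subsystem and the result would be published as a new metric to a JMS queue; both ends are omitted here:

```python
# Simple exponential smoothing over a CPU-usage series (illustrative data).
def exp_smooth(series, alpha=0.5):
    """One-step-ahead forecast: s_t = alpha * x_t + (1 - alpha) * s_{t-1}."""
    s = series[0]
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

cpu = [40.0, 44.0, 50.0, 48.0]
print(exp_smooth(cpu))  # → 47.0
```

The adaptation logic would then compare the forecast (47.0 here) rather than the latest raw value (48.0) against its thresholds.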

Requirements for the participant:

  • Knowledge of REST, Docker containers, and Java.
  • Knowledge of time series forecasting methods.
  • Very good programming skills.
  • Experience in design and development of complex systems integrated with ESB.

Related tutorial: Good Bye Vendor Lock-in: Getting your Cloud Applications Multi-Cloud Ready!

Q-Rapids. Measuring Quality Requirements in Software Projects

Goal: Q-Rapids is a collaborative research project that offers a platform for specifying and measuring quality requirements. The goal of this challenge is to experiment with the Q-Rapids tools on an industrial example. The team is required to build a metrication system that would help an industry stakeholder monitor and manage a software development process.

Difficulty level: Intermediate

Description: Configure the Q-Rapids Tool to define, monitor, and visualize a concrete indicator measuring some quality-related characteristics of a software product. To configure the new indicator, the developer needs to:

  1. develop the component that reads the data from the data source tool (e.g., reading issues from Jira),
  2. configure some metrics using the collected data (e.g., number of open bugs),
  3. define some quality-related factors using the configured metrics (e.g., feature throughput),
  4. define the quality-related indicator using the configured quality factors (e.g., development performance).

Then, use the Q-Rapids Tool to evaluate and visualize the assessment of the indicator.
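The metric-to-factor-to-indicator chain described above can be sketched as follows. The normalization threshold and the weights are illustrative assumptions, not Q-Rapids defaults:

```python
# Sketch of the metric -> factor -> indicator aggregation chain.
def metric_open_bugs(issues, threshold=20):
    """Normalize an open-bug count into a [0, 1] metric (1 = no open bugs)."""
    open_bugs = sum(1 for i in issues if i["type"] == "bug" and i["open"])
    return max(0.0, 1.0 - open_bugs / threshold)

def factor(metric_values):
    """A quality factor as the average of its metrics."""
    return sum(metric_values) / len(metric_values)

def indicator(factor_values, weights):
    """A strategic indicator as a weighted sum of its factors."""
    return sum(w * f for w, f in zip(weights, factor_values))

issues = [{"type": "bug", "open": True}] * 5 + [{"type": "task", "open": True}]
m = metric_open_bugs(issues)                  # 5 open bugs -> 0.75
performance = indicator([factor([m])], [1.0])
print(round(performance, 2))  # → 0.75
```

A real configuration would feed the metric from a Jira connector and let the Q-Rapids dashboard evaluate and visualize the indicator.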

Requirements for the participant:

  • Knowledge about ...