Blockchain Challenges And Triumphs
Starting with blockchain can be difficult because there are many frameworks and clients for the different cryptocurrencies, and much of the information in the posts I read while trying to create my private chain was misleading or outdated; the commands they used no longer worked. Getting to know blockchain and its benefits is very useful, and mastering the topic and its languages is challenging, but who doesn’t like a good challenge?
{{ youtube Iger0FgnT7w }}
When I first started to look at blockchain I got hooked. The fact that it is incorruptible, transparent, and decentralized got me thinking about the possibilities. I jumped in to learn about cryptocurrencies and focused on Ethereum. I was especially curious about Smart Contracts and how they self-execute when a given condition is met. There are many tools and frameworks such as Mist, MetaMask, and Embark, and there are Ethereum clients written in Go, Rust, C++, Python, JavaScript, Java, and Ruby, with Geth (Go) and Parity (Rust) being the leading ones. I tried some of the frameworks, but as a developer I need to understand how everything works, including consensus and mining, so I opted to use Geth (Go-Ethereum). With Geth you can connect to the live network, join the testnets, or create your own private network. I figured I should create my own network to start mining, testing contracts, and writing scripts. I looked online for tutorials with very little success; most posts were outdated, so I turned to the Geth source code. I read the Ethereum Yellow Paper, which was hard going: it contains a lot of raw information that took me a while to comprehend. I also read a book on Solidity and its documentation in order to write Smart Contracts. Solidity isn’t hard to write once you understand how it works and its limitations (of course there are workarounds). I read the White Paper on Dapps, and I dug into the Geth source code to figure out how to create my private chain.
Here’s one of the reasons why knowing the source code is useful. When I started my private chains I couldn’t control the difficulty, the scalar value that determines how hard it is to find the nonce of a block. The higher the difficulty, the more calculations a miner must perform to find a valid block, which makes testing Smart Contracts and Dapps slow. So I changed a value in Geth’s consensus code, freezing the difficulty at 1. In a custom chain without this change, the difficulty grows as the blockchain grows: for example, if the difficulty at block 0 is 1, by block 500 the difficulty will be 166784 and the total difficulty 74106029 (a value representing the accumulated difficulty up to the last block). With the change, the difficulty at block 0 was 1, at block 500 it was still 1, and the total difficulty was 501, making the testing of Smart Contracts and Dapps much faster.
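For readers who want to check these numbers on their own chain, here is a minimal sketch that reads block difficulty and total difficulty. The author worked in Geth’s Go source and JS console, so this web3.py variant and the local node URL are only assumptions for illustration.

```python
# Illustrative check of difficulty growth on a private chain using web3.py.
# The node URL is a placeholder for a local Geth node exposing HTTP-RPC.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))

genesis = w3.eth.get_block(0)
latest = w3.eth.get_block("latest")

print("block 0 difficulty:     ", genesis.difficulty)
print("latest block number:    ", latest.number)
print("latest block difficulty:", latest.difficulty)
print("total difficulty so far:", latest.totalDifficulty)
```

On a chain with the frozen consensus, the last two numbers stay at 1 and roughly the block count, respectively.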
Geth offers a JavaScript console to interact with the blockchain, start mining, create accounts, and so on. I therefore wondered whether I could write scripts to build, for example, an explorer, or to retrieve the full history of the chain and its transactions. I created a script that checks every block in the blockchain for transactions and shows the details of each one, or only the transactions made to or from a given account or contract. I also wrote scripts to mine only when there is a pending transaction, to create and set a new coinbase, and to register user information such as name and e-mail in a Smart Contract.
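A minimal sketch of such an explorer-style script; the author’s original scripts ran in Geth’s JS console, so this web3.py version against a hypothetical local node is only an illustrative equivalent.

```python
# Walk every block and print its transactions; filtering by a given account
# or contract address would be a one-line "if" on tx["from"] / tx["to"].
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # local private-chain node (assumed)

for number in range(w3.eth.block_number + 1):
    block = w3.eth.get_block(number, full_transactions=True)
    for tx in block.transactions:
        print(f"block {number}: {tx['from']} -> {tx['to']}, value={tx['value']} wei")
```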
With these scripts I was curious to run a scalability test. I created a contract that saved user information: name, e-mail, and permission. The idea of the permission was to give a user access only to specific parts of the Dapp. The goal was to build a cluster of 15 nodes and have them all register users with different information while competing with each other to find the next available block. The process took two and a half hours and registered over 45,000 users, about 5 users per second.
Then I moved on to create a Smart Contract that could save documents and keep track of who saves which documents in the blockchain. Due to Solidity limitations I created two Smart Contracts: a Storage contract and a Record contract. The Storage contract only needed to be deployed once, since it saved an arbitrary amount of data with a name acting as a key for later retrieval. It saved data identifiers to avoid overwriting, and it retrieved the Record contract address through the user account address.
A Record contract was created each time a user was registered, and the address of each Record contract was saved in the Storage contract; the Record contract kept track of each document name saved by that user. Due to Solidity limitations, you cannot retrieve an arbitrary object from a Solidity function. You can create a fixed object, but that limits scalability. The solution I used was that for each file saved in the Storage contract, the name of that file was saved in the user’s Record contract. This approach was scalable and transparent, as we could see who saved which file. These contracts are showcased in the Dapp Blockchain4LinkedData.
As example documents, the Smart Contracts have saved RDF documents (Linked Data) in the blockchain.
{{ figure src="/img/Blockchain4LinkedData_Workflow.png" title="Linear workflow of De-centralized app" }}
This diagram shows how the Storage contract works. From a cluster of nodes, the Dapp calls the contract functions on the blockchain and presents the user with a dialog box to specify the document to be saved. Once the user selects a document to add to the blockchain, it is given a unique identifier and the Smart Contract is executed, resulting in a transaction that saves the document in the chain. The document can later be retrieved through its identifier, such as “example.rdf”.
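To make the workflow concrete, here is a hedged sketch of how a client might call such a Storage contract. The address, ABI, and function names (saveDocument, getDocument) are placeholders for illustration, not the actual Blockchain4LinkedData interface.

```python
# Hypothetical client-side interaction with the Storage contract described above.
from web3 import Web3

STORAGE_ADDRESS = "0x0000000000000000000000000000000000000000"   # placeholder
STORAGE_ABI = [  # assumed two-function interface, for illustration only
    {"name": "saveDocument", "type": "function", "stateMutability": "nonpayable",
     "inputs": [{"name": "id", "type": "string"}, {"name": "data", "type": "string"}],
     "outputs": []},
    {"name": "getDocument", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "id", "type": "string"}],
     "outputs": [{"name": "", "type": "string"}]},
]

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))            # private-chain node
storage = w3.eth.contract(address=STORAGE_ADDRESS, abi=STORAGE_ABI)

# Save a small RDF document under a unique identifier, then read it back.
rdf_payload = "<http://example.org/s> <http://example.org/p> <http://example.org/o> ."
tx_hash = storage.functions.saveDocument("example.rdf", rdf_payload).transact(
    {"from": w3.eth.accounts[0]})
w3.eth.wait_for_transaction_receipt(tx_hash)
print(storage.functions.getDocument("example.rdf").call())
```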
Having mastered the topic, I can understand why blockchain is on the rise. The possibilities are endless, and I truly think this technology is going to change the world.
Koneksys attending Graph Day SF 2017
{{ figure src="/img/IMG_4617-1080x675.jpg" title="The Koneksys Graph Day Team" }}
Welcome to the Koneksys Blog!
For our first post, we are announcing the presence of Koneksys at Graph Day 2017 in San Francisco. There you will be able to meet our CEO Axel Reichwein, Senior Developer Victor de Gyves, and Researcher Dennis Grinwald. If you are there, please do come and say hello. We want to talk to as many people as possible to find out all the great things happening in the graph world, and we’d be happy to tell you what we do at Koneksys. If you can’t make it, keep an eye on our Twitter profile during the day where we will be posting updates.
Our purpose at Graph Day is to discover the latest in graph database technology. This is an evolving space, with a number of different vendor solutions out there pushing the limits and innovating. As Koneksys brings advanced Web-powered Linked Data solutions to our clients, we strive to use the latest and greatest technologies to assist. We do this by testing and evaluating, and by learning at events such as Graph Day. We look forward to seeing you there!
INCOSE International Workshop 2018
{{ figure src="/img/blog/incose-podium.png" class="smaller" }}
I attended the INCOSE International Workshop 2018 to present Open Services for Lifecycle Collaboration (OSLC) and to engage with leaders in the systems engineering community. I had the opportunity to present OSLC on 3 different days in these 3 different sessions:
Tool Integration and Model Management Working Group
Model-Based Systems Engineering (MBSE) Initiative & Systems Engineering (SE) Transformation
Systems Modeling and Simulation Working Group
I was very glad to see strong interest from the systems engineering community in OSLC, a key technology for achieving the Digital Thread.
{{ figure src="/img/blog/digital-thread.png" class="" }}
My main OSLC presentation was given to the MBSE Initiative & SE Transformation Working Group. It can be found on SlideShare, as well as on the OMG MBSE wiki, which contains references to other, mostly downloadable, presentations from that working group.
My OSLC presentation was intended for a large audience with possibly little technical background. The intent was to explain key OSLC concepts by comparing them to familiar World Wide Web concepts. I was told several times that this kind of introduction to OSLC is very understandable. It is similar to the OSLC presentation that I gave for the recent Jama webinar, which can be viewed after signing up.
Among the other presentations was an overview of the Systems Modeling Language (SysML) v2 which included a slide on the importance of integrating systems engineering information with other domains.
{{ figure src="/img/blog/sysml-integration.jpg" class="" }}
I was excited to learn about a new “Semantic Technologies for Systems Engineering” (ST4SE) initiative led by NASA JPL. More about this initiative can be found in this project description. An important industry representative shared with me his interesting idea to combine the ST4SE ontologies with the OSLC Configuration Management specification for RDF version-management at a global level. We would be very excited to participate in such an effort.
In general, the relatively new OSLC Configuration Management specification is drawing a lot of attention. I heard a lot of interest in linking ALM and PLM information using a common global configuration management approach. I was told at the INCOSE IW that the current OSLC Configuration Management concepts would not easily map one-to-one to PLM configuration management concepts. As this topic needs to be addressed for OSLC to achieve broader adoption covering important domains such as PLM, my team at Koneksys will investigate this question shortly.
Stay tuned!
Axel
Linked Data Takes Care of Your Health
I survived HL7 v3. Don’t go to version 3, go to HL7 FHIR!
I was the manager of an HL7 project in Mexico in which we developed HL7 v3 adapters for Electronic Health Records (EHR) and received continuous feedback on our solutions from the authors of the HL7 v3 implementation guide for Mexico.
{{ figure src="/img/blog/standards.png" class="small" title="Image credits: www.xkcd.com/927/" }}
Here are the problems my team and I stumbled on with HL7 v3:
It’s not sexy. Try to attract talented programmers to work with XML! It’s difficult! Talented developers want to work with JSON or YAML. You cannot change that. Just accept it.
It requires a lot of time to read about the HL7 v3 concepts. There are so many HL7 documents. To grasp the concepts of a clinical history you first have to read the introduction, then two sections covering the primary and secondary sets of standards, then browse over 37 standards documents until you understand the Clinical Document Architecture 2.0, and finally the specific implementation guide.
HL7 v3 specifications are intentionally high-level, expecting country-specific extensions for country-specific implementations. So there is no assurance of interoperability between international systems and services. Furthermore, many nations simply don’t have the resources and expertise to invest in elaborating HL7 v3 implementation guides.
Assuming an implementation guide exists, it is too abstract for developers. It has a lot of information about the already very complex XML content and format, but it is of little help to the developer on topics such as communication channels, protocols, and scale-out problems. I won’t rant against a specific implementation guide; just google “HL7 v3 implementation guide” for a random country. You’ll find that most of it is the same class diagrams already found in the HL7 site documentation, plus a table of field descriptions and a big copy-pasted XML document.
Static hierarchical data structures as in HL7 v3 do not scale. Object identifiers (OIDs) are allocated hierarchically, so that, for instance, the object with OID “1.2.3” is the only object which can be connected to the object with OID “1.2.3.4”. However, relationships between organizations, patients, and even authorities are not hierarchical. Wake up to social networks, please! This is the 21st century! Relationships are best described by graphs instead of hierarchies!
{{ figure src="/img/blog/duty_calls.png" class="smaller" title="Image credits: www.xkcd.com/386/" }}
What about an architecture which uses developer-friendly technology? And what if that technology supported widely adopted standards, was proven over decades, and was designed for scale-out by providing universal identifiers ready for ad-hoc links between data?
Enter the new HL7 FHIR. It is based on Linked Data and gives you, out of the box:
Support for Web standards such as HTTP, REST, JSON-LD, and OAuth
A foundation on the proven, scalable Web architecture
Support from a large community of web developers
Open, developer-friendly documentation
A broader scope than HL7 v3, covering, for example, concepts related to communication protocols such as RESTful APIs
It is interesting to note that not everything is yet specified to the last detail in HL7 FHIR. For example, FHIR does not specify a REST API standard to be used, such as smartAPI or Open Services for Lifecycle Collaboration (OSLC). These Hypermedia REST API standards allow clients to discover the resources and services of a healthcare system without having to look up human-readable documentation. Instead, client applications can use the standardized API to discover available resources and services on their own, simply by understanding the metadata returned by the API.
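Despite that caveat, the basic developer experience is already much friendlier than v3: a FHIR read is plain HTTP plus JSON. Here is a minimal sketch; the base URL and patient id are placeholders, not a real deployment.

```python
# A basic FHIR resource read over HTTP with JSON.
import requests

base = "https://fhir.example.org/R4"   # hypothetical FHIR endpoint
resp = requests.get(f"{base}/Patient/123",
                    headers={"Accept": "application/fhir+json"})
patient = resp.json()

# FHIR resources are ordinary JSON documents with a declared resourceType.
print(patient["resourceType"])                    # "Patient"
print(patient.get("name", [{}])[0].get("family"))
```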
I intend to explain the benefits of HL7 FHIR for automation in more detail in a future blog article.
Have a nice day Compadres!
Make data connected again!
We live in a highly connected world in which a large amount of related data is created every day. Although these relationships give us valuable insights into how various data points (actions, people, etc.) are associated with each other, affect each other, or may influence each other in the future, they often get lost once the data is stored in common relational tables or other general-purpose NoSQL data stores.
{{ figure src="/img/connected-web-data.png" title="Fig. 1: Illustration of the connected web of data" }}
The graph data model (as shown in Figure 2) addresses the problem of “isolated data points” by treating relationships as first-class citizens. As an example, a user’s post or tweet and its additional information (hashtags, links, sources, etc.) can be expressed as a graph and analyzed to understand how the post influenced the user’s follower base.
This is just a simple example showing that graphs are well suited for finding patterns and anomalies in large and complex data volumes.
{{ figure class="small" src="/img/social-media-graph.png" title="Fig. 2: Social media graph" }}
When first starting to work with graphs and graph databases, one notices that the graph data model and its corresponding query language differ from one data store to another. Graphs are often expressed as property graphs (Figure 3).
{{ figure class="small" src="/img/property-graph-model.jpeg" title="Fig. 3: Property graph model" }}
A property graph consists of triples: two nodes connected by a directed edge. Each node has its own unique ID and may have several properties that expose more detailed information about it (e.g. name="vadas"). Each edge expresses the relationship between two nodes, has a unique ID, and can also have multiple properties. The graph data model can vary from one company’s context to another’s, and so can the underlying graph storage engine, ranging from key-value and document-oriented stores to native graph data stores and many more. Some keep the graph data in memory (for fast operations before persisting it on disk), while others store the data on secondary storage right away, making use of smart storage data structures such as LSM trees or B+ trees that provide indexed access to the data and reduce the number of (slow) disk accesses when operations are performed.
But it does not take long to encounter the weak point of this variety of graph models and databases: data integration. Different companies often use their own derivation of a property graph model (different data types, property structures, IDs, etc.), making it almost impossible to integrate data from one company’s context into another.
To tackle this problem, Tim Berners-Lee, the inventor of the World Wide Web, introduced his vision of the so-called Next Web, or Linked Open Data (there is a good introduction video). The Next Web’s underlying technology is available as open source and is based on the four rules that Berners-Lee proposed:
Use URIs as names for things
Use HTTP URIs so that people can look up those names.
When someone looks up a URI, provide useful information, using the standards (RDF, SPARQL)
Include links to other URIs, so that they can discover more things.
RDF, the Resource Description Framework, provides a graph model (Figure 4) that consists of triples: a subject node and an object node connected by an edge, or predicate, that represents a directed connection from subject to object. So far nothing new compared to the property graph model, but the power of RDF’s graph model is that each element (subject, predicate, object) is represented by a URI (an object may also be represented by a literal value of some data type). In addition, RDF’s full potential blossoms when the graph is built using common vocabularies such as RDFS, a W3C-recommended vocabulary (for more info, see RDFS). It is this uniqueness and shared structure that make RDF so powerful, enabling easy data integration across multiple collaborating units.
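As a small illustration of those rules, the sketch below builds a tiny RDF graph with rdflib, using URIs for every node and edge and the rdfs vocabulary for labels; the example.org namespace and names are placeholders.

```python
# Build a small RDF graph: all subjects, predicates, and (here) objects are URIs,
# and RDFS supplies shared, well-known terms such as rdfs:label.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")   # placeholder namespace
g = Graph()

g.add((EX.marko, RDF.type, EX.Person))
g.add((EX.marko, RDFS.label, Literal("marko")))
g.add((EX.marko, EX.knows, EX.vadas))
g.add((EX.vadas, RDF.type, EX.Person))
g.add((EX.vadas, RDFS.label, Literal("vadas")))

print(g.serialize(format="turtle"))
```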
The language used to query an RDF graph is called SPARQL. SPARQL is a declarative query language (the user specifies what data to retrieve, not how to retrieve it) that expresses graph pattern queries across diverse data sources, whether the data is stored natively as RDF or viewed as RDF via middleware.
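A minimal SPARQL example, again using rdflib and purely illustrative data; the same query text could be sent unchanged to a SPARQL endpoint exposed by a triple store.

```python
# Run a SPARQL graph-pattern query over a small in-memory RDF graph.
from rdflib import Graph

turtle = """
@prefix ex: <http://example.org/> .
ex:marko ex:knows ex:vadas .
ex:marko ex:knows ex:josh .
ex:josh  ex:created ex:ripple .
"""
g = Graph()
g.parse(data=turtle, format="turtle")

query = """
PREFIX ex: <http://example.org/>
SELECT ?friend WHERE { ex:marko ex:knows ?friend . }
"""
for row in g.query(query):
    print(row.friend)
```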
Many industries, such as genomic engineering, have adopted RDF because they deal with large, complex, highly connected datasets and strongly rely on a consistent, uniform graph data representation shared by many different companies and labs.
{{ figure class="small" src="/img/rdf-graph-model.gif" title="Fig. 4: RDF Graph Model" }}
As applications become more complex in terms of their size (Big Data), comprising data from a variety of devices, platforms, sensors, etc., the number of industries that adopt RDF and contribute to advancing its technology is steadily growing.
Although there are many different graph databases out there (a popular one being Neo4j), most of them store their data in the property graph model. Only a few support RDF, and those that do don’t scale out (a Big Data requirement) very well.
As an intern at Koneksys, located in San Francisco, CA, I created a tool that compiles an RDF graph into an Apache Spark GraphFrame and runs SPARQL queries on it.
GraphFrames’ ability to run graph queries and graph algorithms (e.g. PageRank) in memory (fast access) and to fetch data from a variety of sources makes it a great tool for RDF applications that work with large datasets. In addition, GraphFrames’ internal query language shares a common denominator with SPARQL: the capability to query graph patterns.
GraphFrames is an Apache Spark compatible package that was introduced by the AMPLab at UC Berkeley and is actively being developed (for more info, see GraphFrames). Due to its promising technology it may be added to the Apache Spark ecosystem soon, which would make it part of a large and active user community. A GraphFrame is based on two DataFrames, the data model of Spark SQL: one for the nodes (internally called vertices) and one for the edges. Querying a GraphFrame outputs another DataFrame that can itself be used to create a new GraphFrame. A DataFrame, and therefore a GraphFrame, can be created from a large variety of data sources such as Apache Cassandra, HBase, Neo4j, HDFS, and many more. Basically any database can be used as an input source if a fitting connector is created beforehand.
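Here is a small sketch of what working with a GraphFrame looks like; it assumes the graphframes package is available to the Spark session, and the toy data mirrors the earlier social-graph example rather than anything from SPARQL2GF itself.

```python
# Build a GraphFrame from two DataFrames (vertices and edges) and run a
# motif query, the GraphFrames analogue of a SPARQL basic graph pattern.
from pyspark.sql import SparkSession
from graphframes import GraphFrame

spark = SparkSession.builder.appName("graphframes-sketch").getOrCreate()

vertices = spark.createDataFrame(
    [("1", "marko"), ("2", "vadas"), ("3", "josh")], ["id", "name"])
edges = spark.createDataFrame(
    [("1", "2", "knows"), ("1", "3", "knows")], ["src", "dst", "relationship"])

g = GraphFrame(vertices, edges)
g.find("(a)-[e]->(b)").filter("e.relationship = 'knows'").show()
```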
This tool is currently being tested in a cluster (HDFS cluster) and may also be tested in a single-node setup, using various dataset sizes corresponding to the rules defined by the Berlin SPARQL Benchmark (BSBM), which compares the performance of RDF applications. Feel free to test the tool and read the documentation about its architecture on GitHub: SPARQL2GF.
PLM Roadmap and PDT Europe 2017
Last week Koneksys was present at PDT Europe 2017 in Gothenburg, Sweden. The purpose was for me to talk about the Data Web and the OSLC community, and of course to learn about the latest industry trends from PLM experts. The two-day event, organized by Eurostep, was preceded by the one-day PLM Roadmap organized by CIMdata. As summed up on the PLM Roadmap page, it was “3 days of presentations, 2 unique events, 1 ticket”!
The theme for the first day, PLM Roadmap, was ‘Digitalization’. The Gartner definition is “the use of digital technologies to change a business model and provide new revenue and value-producing opportunities; it is the process of moving to a digital business”. This trend has been powering industries forward for years, but right now in the PLM industry it is becoming a driving force of disruption as it becomes clear that tools, processes, and ways of thinking need to change. So in this spirit, over the three days as a whole, there were discussions about the Digital Twin, Digital Thread, Industry 4.0, and more.
As the first day proceeded, and indeed throughout the PDT event over the next couple of days, it became clear that there were common issues everyone was trying to solve, whether a PLM vendor, a systems integrator, or a company trying to manufacture in the most efficient way. On standards, for example, while they were deemed imperfect and splintered, there was general consensus that they are the best way to future-proof solutions and increase collaboration. On the topic of the digital thread, many of the presenters were working to solve the issue of maintaining the flow of data through systems and making sure it is linked. This is where we believe the Data Web is key for robust PLM data integration.
{{ figure src="/img/blog/pdteurope.jpg" class="small" title="Brian King speaking about the Data Web" }}
As part of the ‘Standards for MBSE interoperability and openness’ track, I presented on the Data Web and in particular on Web Standards, OSLC, and specifications for Web-based data interoperability. Follow through this link for the slides.
Data connectivity is at the core of PLM solutions in order to achieve traceability, consistency, and integration across product lifecycle data. The case I was making, and one that Koneksys firmly believes in, is that the Web has demonstrated connectivity at the largest scale. As the Web is increasingly becoming a Web of data, in addition to a Web of documents, it may transform data management solutions such as PLM, which until now have been unaffected by the disruption caused by the Web.
For more information on what happened, you can explore the #plmroadmap and #pdte17 tags on Twitter, and read the following posts by other attendees:
The weekend after PDT Europe 2017 (part 1) by Jos Voskuil
CIMdata PLM Roadmap: Digitalization trend and old PLM problems by Oleg Shilovitsky
PDT Europe 2017: The culmination of complexity in PLM and next steps by Oleg Shilovitsky
Presentation of OSLC at Purdue PLM Meeting 2018
I attended the Purdue PLM meeting for the first time and it was full of interesting content! Approximately 60 people attended the conference, which focused this year on PLM-ALM integration. The event was organized by Prof. Nathan Hartman, who was key in creating a friendly atmosphere that encouraged dialog among participants. Presentations and video recordings can be found on the PLM meeting web page.
I had the opportunity to present Open Services for Lifecycle Collaboration (OSLC) and explain how it could serve as the backbone to achieve PLM-ALM integration using open Web standards. Presentation and video recording are available on slideshare and vimeo. Below is an image of the presentation showing the collaboration challenges in designing complex systems.
{{ figure src="/img/blog/PLM-presentation-main.png" class="" }}
I was glad to learn that PLM-ALM integration using OSLC is already taking place between Polarion and Siemens Teamcenter. This is however only possible if the Linked Data Framework (LDF) of Siemens Teamcenter is activated. I would have loved to find more information about LDF and about the capabilities of this integration but I couldn’t find any publicly available documentation. Please let me know if you are aware of such documentation.
General Motors showed exciting presentations on their MBSE and PLM efforts, which were in part based on OSLC. Talking about vehicles, I would be very curious to learn more about manufacturers of autonomous vehicles and how they are tackling the critical challenge of safety. Due to the increased complexity of autonomous vehicles, I would expect manufacturers to do a lot of computer-based simulation in combination with systems engineering and digital twin activities. All these activities require collaboration and data connectivity in order to be performed efficiently. For example, traceability from requirements to test cases, to simulation models, and simulation inputs would need to be established in order to achieve a big picture overview and have a basis for collaboration amongst systems, software and hardware engineers. Maybe the next blog post will address some thoughts on this in more detail.
In the meantime, please find below images of exciting GM presentations at the PLM meeting. You will notice that the slides refer to OSLC, Linked Data, systems engineering, PLM/ALM integration, traceability, and change management, and include many examples of data connectivity across data from different engineering disciplines. I would like to emphasize that the automatic generation of architecture analyses and reports is shown as being driven not from a single source model, as is often described in the model-driven architecture literature, but from multiple distributed models in combination with their relationships. You could call this graph-driven architecture! Model-driven is so last year :) Obviously, this only works if data from different sources is exposed through a common open API and using global identifiers, as enabled by OSLC.
{{ figure src="/img/blog/PLM-presentation-slide-1.png" class="" }}
{{ figure src="/img/blog/PLM-presentation-slide-2.png" class="" }}
{{ figure src="/img/blog/PLM-presentation-slide-3.png" class="" }}
{{ figure src="/img/blog/PLM-presentation-slide-4.png" class="" }}
{{ figure src="/img/blog/PLM-presentation-slide-5.png" class="" }}
{{ figure src="/img/blog/PLM-presentation-slide-6.png" class="" }}
{{ figure src="/img/blog/PLM-presentation-slide-7.png" class="" }}
{{ figure src="/img/blog/PLM-presentation-slide-8.png" class="" }}
There was also a very exciting presentation on analytics in the context of PLM. Below are some snapshots of that presentation.
{{ figure src="/img/blog/PLM-presentation-slide-9.png" class="" }}
{{ figure src="/img/blog/PLM-presentation-slide-10.png" class="" }}
{{ figure src="/img/blog/PLM-presentation-slide-11.png" class="" }}
Presentation to Modelica/FMI Community on Next Web for Connecting Engineering Data
I had already presented the Next Web at software and systems engineering conferences, and this time I talked about it at the North America Modelica Users Group (NAMUG) 2017 conference held at Stanford. The Modelica/FMI community is focused on performing simulation with open standards. It attracts, among others, engineers working on motorsports, nuclear power plants, and building simulation.
I chose to refer to the “Next Web” in the title. By Next Web, I meant a data integration approach based on Linked Data conforming to the Open Services for Lifecycle Collaboration (OSLC) initiative. The only problem is that Linked Data and OSLC don’t ring any bells in people’s minds. Even though both are based on the World Wide Web, neither explicitly refers to the Web, and both are typically misperceived: Linked Data sounds too much like Big Data, and OSLC sounds too much like SOA. Having noticed that the inventor of the Web, Tim Berners-Lee, had labeled his TED talk on Linked Data “The next Web”, I thought it would make sense to reuse this catchy title.
Having worked with Modelica, FMI, ordinary differential equations, numeric solvers, and optimizers in my past as an aerospace engineer, I knew that advanced simulation activities typically require collaboration between engineers from different engineering disciplines. The goal of my talk was to raise awareness in the simulation community of the Next Web as an infrastructure for connecting engineering data. I also wanted the Modelica/FMI community to understand what impact the Next Web would have on existing standards and on data interoperability. I only had 20 minutes for my talk, so there was no time for stories or anecdotes, but that suited me since I’m generally not a good storyteller.
After my talk, I learned about an interesting international project called Brick in which building knowledge is saved as Linked Data. The project members include IBM Research and several universities, including UC Berkeley. I also found Michael Tiller’s presentation on web-based user interfaces to complex simulation models fascinating and very much aligned with the vision of the Next Web. Examples of such web-based user interfaces can be found at http://modelica.university/res/apps.
I will follow up with the new connections I have made in the Modelica/FMI community, and I thank the NAMUG conference organizers for accepting my talk. Here is the link to my presentation.
What Is The Data Web
{{ figure src="/img/New-Generation.svg" title="A new generation of data management solutions" }}
Data is generally saved in many different databases. Data analysis is more valuable if applied to all data, not just data from a single source. In addition, data analysis can generate more interesting results if it takes into account logical links existing between data from various sources. For example, data from a petroleum deposit stored in database A can be linked to predictions saved in database B and linked to real-time measurements of oil extraction saved in database C.
An analysis taking into account all data and all logical links between data is ideal. This is only possible by using standards for:
Identifying data at a global level
Making data accessible
Describing data and links between data from various sources
At Koneksys, we believe that adoption of the following 3 Web standards supports these goals and helps link data better:
URL for identifying data at a global level (as for web pages in HTML)
REST APIs (HTTP) compliant with a Hypermedia standard (e.g. OSLC or smartAPI) for making data accessible (an improved version of the REST APIs commonly used today)
RDF for describing data and links between data of various origins (the equivalent of HTML but for data)
URL, HTTP, and RDF are Web standards defined by the W3C organization. Basically, we apply the architecture of the Web, as it currently exists to connect web pages, to connect data on a private enterprise Web (beware, this is not about a public web, although of course it can be applied there as well). We call this approach the "Data Web". This approach provides a global view of data and its links which may easily be updated anytime and in turn provide updated analysis results.
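As a sketch of these three standards working together, the snippet below fetches one hypothetical data resource by its URL over HTTP, asks for an RDF representation, and prints the triples, including the links pointing to data hosted elsewhere.

```python
# Fetch a resource from a (placeholder) data web server with content negotiation
# and inspect its RDF description.
import requests
from rdflib import Graph

url = "https://data.example.com/deposits/42"          # hypothetical data URL
resp = requests.get(url, headers={"Accept": "text/turtle"})

g = Graph()
g.parse(data=resp.text, format="turtle")

# Each triple may link this resource to data hosted in another database (B, C, ...).
for subject, predicate, obj in g:
    print(subject, predicate, obj)
```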
Traditional data management applications are restricted to using data from a specific source. In contrast, Web search engines, such as Google’s search service, consider the Web as their source of documents instead of a specific database. The Web can therefore be viewed as a source of documents and data. We believe that future data management applications, such as those for data analysis and visualization, will use data on the private enterprise Web as their source; in other words, the Data Web becomes a "new global database".
Koneksys offers solutions to integrate data according to the Data Web. We have developed all components of the Data Web architecture:
Servers exposing data according to standardized REST APIs
Clients using these REST APIs
Data management solutions taking as input data from the private enterprise Web instead of a specific database, to further process it for example with specific solutions such as Spark or Elasticsearch
More details can be found in the slide deck 'Data Integration Solutions Created By Koneksys'.
We believe that the Data Web will revolutionize the architecture of data management solutions.
Design Process Automation
{{ titleheadline title="Design Cockpit 43" }}
During his PhD research at the Faculty of Aerospace Engineering at the University of Stuttgart, Germany, Dr. Axel Reichwein, founder of Koneksys, developed a holistic product data model based on UML. This research has been incorporated by the German high-tech company IILS mbH (https://www.iils.de/) into a formal design language development framework called "Design Cockpit 43", which supports design process automation.
figure class="margin"
img src="/img/information-architecture.jpg" alt="Information Architecture of Design Cockpit 43" /
figurecaptionInformation Architecture of Design Cockpit 43/figurecaption
/figure
Koneksys is proud to demonstrate the principle of design languages to interested customers and to offer consulting and license management services for the “Design Cockpit 43”. Please contact us directly for terms and conditions. Further information on design languages can be downloaded in the form of a white paper entitled "Total Engineering Automation" at https://www.iils.de/#downloads.
Finite Element Analysis
{{ titleheadline title="Research Project for NIST" }}
Supporting interoperability between Finite Element Analysis (FEA) tools
Developing formal yet simple neutral description of FEA models
Presented at NAFEMS World Congress 2017
figure class="small margin"
img src="/img/fea-1.png" alt="Research Project for NIST - Image 1" /
/figure
figure class="small margin"
img src="/img/fea-2.png" alt="Research Project for NIST - Image 2" /
/figure
Linked Data Research
{{ titleheadline title="SPARQL-to-GraphFrames" }}
Run SPARQL queries on large RDF graphs using the Apache Spark GraphFrames package
figure class="small margin"
img src="/img/SPARQLtoGraphFrames.svg" alt="SPARQL to GraphFrames" /
/figure
div class="github-link" style="margin-top: 30px;"
div class="github-logo"
i class="fa fa-github"/i
spanGitHub/span
/div
a href="https://github.com/koneksys/SPARQLtoGraphFrames/"Open Source Solution Available!/a
/div
{{ titleheadline title="Git for Linked Data" }}
Run Git commands on RDF graphs for version management
figure class="smaller margin"
img src="/img/gitforlinked-data.png" alt="Git for Linked Data" /
/figure
div class="github-link" style="margin-top: 30px;"
div class="github-logo"
i class="fa fa-github"/i
spanGitHub/span
/div
a href="https://github.com/koneksys/Git4RDF"Open Source Solution Available!/a
/div
{{ titleheadline title="Decentralized Applications for Smart Contracts" }}
More secure apps based on decentralized consensus
Deploying Decentralized Applications (dapps) on Ethereum, with smart contracts defined in Solidity
Saving Linked Data in blockchain
figure class="smaller margin"
img src="/img/blockchain.png" alt="Decentralized Applications for Smart Contracts" /
/figure
div class="github-link" style="margin-top: 30px;"
div class="github-logo"
i class="fa fa-github"/i
spanGitHub/span
/div
a href="https://github.com/koneksys/Blockchain4LinkedData"Open Source Solution Available!/a
/div
Analysis
{{ titleheadline title="Project portfolio calculus over Big Data" }}
Deterministic project valuation by Net Present Value and option valuation
Sensitivity analysis of deterministic project portfolios
Probabilistic valuation of project portfolios with Latin hypercube and Bayesian inference
Probabilistic optimization of project portfolios with genetic algorithms
Extrapolation and automatic regression analysis of project portfolios
Application of Simplex methods over big data project portfolios for optimization
Application of filtering techniques with genetic algorithms and correlation to correct artifacts in historic big data
Blockchain Solutions
{{ titleheadline title="Example Smart Contracts Created by Koneksys" }}
Smart contracts for ICOs (Initial Coin Offerings)
Smart contracts to create ERC-20 Tokens
Custom smart contracts based on requirements
figure class="small margin"
img src="/img/services/blockchain_smart-contracts.png" alt="Blockchain Crowdsale" /
/figure
{{ titleheadline title="Initial Coin Offering (ICO)" }}
Various ICOs were developed:
For one or multiple beneficiaries with funds possibly split according to percentages
For allowing access to funds after a certain deadline date or token amount
With the option to update the beneficiaries' addresses and percentages
figure class="small margin"
img src="/img/services/blockchain_crowdsale.png" alt="Blockchain Crowdsale" /
/figure
{{ titleheadline title="Blockchain Development Tools" }}
Ethereum
Solidity
Geth
Truffle
Mocha
JavaScript
figure class="margin"
img src="/img/services/blockchain_stack.png" alt="Blockchain Smart Contracts" /
/figure
Data Web Apps
{{ titleheadline title="Google-like Search" }}
{{ figure src="/img/data-web-apps-1.png" }}
{{ titleheadline title="Link Editor" }}
{{ figure src="/img/data-web-apps-2.png" }}
{{ titleheadline title="Tree/BOM Editor" }}
{{ figure src="/img/data-web-apps-3.png" }}
{{ titleheadline title="Example Tree/BOM Editor" }}
figure class="smaller"
img src="/img/data-web-apps-5.png" alt="Data web servers" /
figcaptionChanging parent block with drag and drop/figcaption
/figure
Load default model example
Edit block names/ Add new blocks
Delete blocks with/without cascade mode
Change parent block with drag and drop
Saving content in Apache Jena TDB triplestore
div class="github-link" style="margin-top: 30px;"
div class="github-logo"
i class="fa fa-github"/i
spanGitHub/span
/div
a href="https://github.com/ld4mbse/linkeddata-tree-editor"Open Source Solution Available!/a
/div
{{ titleheadline title="Graph Editor" }}
{{ figure src="/img/data-web-apps-4.png" }}
{{ titleheadline title="Data Web Platform" }}
figure class="small"
img src="/img/data-web-apps-6.png" alt="Data web servers" /
/figure
Full-text search
Type viewer
Resource Editor
SPARQL Client
RDF + OSLC Adapter Import
div class="github-link" style="margin-top: 30px;"
div class="github-logo"
i class="fa fa-github"/i
spanGitHub/span
/div
a href="https://github.com/koneksys/KLD"Open Source Solution Available!/a
/div
Data Web Clients
{{ titleheadline title="Example Data Web Clients Created by Koneksys" }}
Clients of OSLC adapters (Simulink, MagicDraw SysML, AMESim, etc.)
Bidirectional transformation client between Simulink and MagicDraw SysML
Bidirectional transformation client between IBM DNG and OpenMBEE
Synchronization clients between OSLC adapters and RDF stores
{{ titleheadline title="Transformation & Sync Client: Simulink ⇔ SysML" }}
figure class="small"
img src="/img/data-web-client-1.png" alt="Data web servers" /
/figure
Derive model from other model
Bidirectional model transformation between SysML and Simulink
div class="github-link" style="margin-top: 30px;"
div class="github-logo"
i class="fa fa-github"/i
spanGitHub/span
/div
a href="https://github.com/ld4mbse/oslc-modeltransformation-simulink-magicdraw"Open Source Solution Available!/a
/div
{{ titleheadline title="Simulink ⇔ SysML Syncing Example" }}
figure class="small"
img src="/img/data-web-client-2.png" alt="Data web servers" /
figcaptionCorrespondence between Simulink Parameter and SysML Value Property/figcaption
/figure
{{ titleheadline title="Transformation Client: IBM DNG ⇔ SysML" }}
figure class="small"
img src="/img/data-web-client-3.png" alt="Data web servers" /
/figure
Sync requirements between tools
Bidirectional model transformation between IBM Rational DOORS Next Generation and OpenMBEE (http://www.openmbee.org/)
Synced requirement relationships between SysML, OpenMBEE and DNG
Data Web Servers
{{ titleheadline title="Example Data Web Servers Created by Koneksys" }}
Data Web Servers called OSLC adapters conforming to Open Services for Lifecycle Collaboration (OSLC) standards.
figure class="small"
img src="/img/data-web-servers.svg" alt="Data web servers" /
/figure
div class="github-link" style="margin-top: 30px;"
div class="github-logo"
i class="fa fa-github"/i
spanGitHub/span
/div
a href="https://github.com/ld4mbse"Open Source Solution Available!/a
/div
{{ titleheadline title="Mandatory Features" }}
Machine-readable representation of resources in RDF (e.g. JSON-LD, RDF/XML, Turtle, etc...)
Support for OSLC Core Specification (Self-describing services enabling service discovery)
{{ titleheadline title="Possible Optional Features" }}
HTML representation of resources
Capability to modify, create, delete resources
Support for ETags to avoid conflicts when updating resources
Information about change events, such as deletion, creation, and modification events (according to the OSLC Tracked Resource Set specification)
Delegated UIs giving clients an HTML iframe displaying selection/creation/preview dialogs (according to the OSLC Delegated UI spec)
Query capabilities similar to GraphQL (using OSLC selective properties according to the OSLC Core spec), returning to clients, for example, a representation of a requirement including only a few selected attributes (and not all 100); see the sketch after this list
Graph query capabilities similar to SPARQL
Resource paging to divide large resource representations into smaller chunks which clients can retrieve one by one, similar to Google search results
Support for syncing with Subversion repository
Hosting resource shapes and RDF vocabulary
Configuration options for syncing adapter with original data source (e.g. periodic, user-driven, etc..)
Configuration options for selecting data from original data source to be considered by adapter (e.g. specific data types, or specific data, or specific data containers)
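As an illustration of the selective-properties feature mentioned in the list above, a client might request just two attributes of a requirement. The resource URL below is a placeholder, and the exact prefix handling depends on the server.

```python
# Request only selected properties of an OSLC resource via the
# oslc.properties query parameter defined by OSLC Core.
import requests

url = "https://tools.example.com/oslc/requirements/REQ-1"   # hypothetical resource
resp = requests.get(
    url,
    params={"oslc.properties": "dcterms:title,dcterms:creator"},
    headers={"Accept": "application/rdf+xml"},
)
print(resp.text)   # RDF containing only the requested properties
```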
{{ titleheadline title="Choices" }}
Web application server (e.g. Tomcat, Jetty, WildFly (formerly JBoss), Node.js)
REST API framework (e.g. Apache Wink, RESTEasy, Jersey)
What data gets exposed through the data Web server?
How many links between data get exposed through the data Web server?
Data Web Specifications
{{ titleheadline title="Specifications for Web-based Data Exchange" }}
figure class="small margin"
img src="/img/data-specifications-1.png" alt="Specifications for Web-based Data Exchange Image" /
/figure
OSLC Specification = RDF Vocabulary + Constraints (Resource Shapes)
Both specification documents hosted by a data web server (not necessarily the same as an OSLC adapter)
{{ titleheadline title="New Specifications for Web-based Data Exchange" }}
figure class="margin"
img src="/img/data-specifications-2.svg" alt="New Specifications for Web-based Data Exchange Image" /
/figure
Creation of neutral vocabulary for MBSE
MBSE specification for system architecture covering concepts of SysML, Modelica, Simulink
New application-specific OSLC Specifications
{{ titleheadline title="New RDF Vocabulary for Dynamic System Models" }}
figure class="small margin"
img src="/img/data-specifications-3.svg" alt="New RDF Vocabulary for Dynamic System Models Image" /
/figure
Common modeling constructs: Models, Subsystems, Blocks, Ports, Connections, Parameters, Variables
div class="github-link" style="margin-top: 30px;"
div class="github-logo"
i class="fa fa-github"/i
spanGitHub/span
/div
a href="https://github.com/ld4mbse/ecore-mbse"Open Source Solution Available!/a
/div
{{ titleheadline title="Generation of RDF Vocabulary" }}
figure class="small margin"
img src="/img/data-specifications-4.png" alt="Generation of RDF Vocabulary Image" /
/figure
Generated RDF vocabulary and resource shapes from Ecore metamodel
div class="github-link" style="margin-top: 30px;"
div class="github-logo"
i class="fa fa-github"/i
spanGitHub/span
/div
a href="https://github.com/ld4mbse/transformation-ecore2oslcspec"Open Source Solution Available!/a
/div
{{ titleheadline title="New OSLC Specification for SysML" }}
Vocabulary + Resource Shapes for SysML 1.3
Conversion of existing schema into RDF vocabulary
a href="http://www.omg.org/techprocess/experimental-rdf/SysML/1.3/"http://www.omg.org/techprocess/experimental-rdf/SysML/1.3//a
Work performed for the OSLC4MBSE WG in 2014 (http://www.omgwiki.org/OMGSysML/doku.php?id=sysml-oslc:oslc4mbseworkinggroup)
div class="image--gallery"
div class="gallery-item"
figure
img src="/img/OSLC-Logo.png"/
/figure
/div
div class="gallery-item"
figure
img src="/img/OMGlogo.jpg"/
/figure
/div
/div
Databases
figure class="margin"
/figure
Experience generating extended SQL languages and translating them to any SQL-based DBMS
Experience adding qualitative calculus on top of SQL and PostgreSQL
Scale-out techniques on PostgreSQL using Write-Ahead Logging
Techniques to optimize storage and retrieval of big data time series
PostgreSQL, MySQL, CockroachDB, MongoDB, Lucene, Elasticsearch, Solr, OracleDB, SQL Server, generic ISAM/BerkeleyDB programming, Jena, JanusGraph
figure class="small margin"
img src="/img/Big_Data.jpg" alt="Big Data" /
/figure
Model-Based Systems Engineering
{{ titleheadline title="SysML-Modelica" }}
Former Chair of Object Management Group specification for SysML-Modelica transformation
Developed various SysML-Modelica transformation implementations
SysML (Eclipse XMI) - Modelica in QVT
SysML (MagicDraw) - Modelica in Java
SysML (Papyrus) - Modelica in Java
figure class="small margin"
img src="/img/mbse-1.png" alt="Data web servers" /
/figure
div class="github-link" style="margin-top: 30px;"
div class="github-logo"
i class="fa fa-github"/i
spanGitHub/span
/div
a href="https://github.com/SysMLModelicaIntegration/"Open Source Solution Available!/a
/div
{{ titleheadline title="SysML Consulting" }}
System Architecture Modeling
Supporting Parametric Trade-Off Studies
Supporting Topological Trade-Off Studies
Visualizing and Automating Trade-Off Studies (executable activity diagrams)
Translation of a system architecture model into simulation models (CAD, FEA, multibody-system models, controller models, etc.)
Supporting SysML interoperability
{{ titleheadline title="OCL constraints in SysML Models " }}
First Order Predicate Logic (Quantifiers, Predicates, Functions, Constants, Variables)
Constraints defined in Object Constraint Language (OCL)
Applied OCL constraints against SysML models using Eclipse OCL
figure class="small margin"
img src="/img/mbse-2.png" alt="Data web servers" /
/figure
{{ titleheadline title="SysML Consulting" }}
Add graph query capability for SysML
Perform SPARQL Queries on SysML data
Use regular expressions
Perform mathematical operations (SUM, COUNT, etc..)
figure class="small margin"
img src="/img/mbse-3.png" alt="Data web servers" /
/figure
div class="github-link" style="margin-top: 30px;"
div class="github-logo"
i class="fa fa-github"/i
spanGitHub/span
/div
a href="https://github.com/ld4mbse/magicdraw-plugin-sparql"Open Source Solution Available!/a
/div
{{ titleheadline title="Web-Based SysML Editor" }}
figure class="margin"
img src="/img/mbse-4.png" alt="Data web servers" /
/figure
div class="github-link" style="margin-top: 30px;"
div class="github-logo"
i class="fa fa-github"/i
spanGitHub/span
/div
a href="https://github.com/koneksys/web-based-sysml-block-diagram"Open Source Solution Available!/a
/div
Natural Language Processing
figure class="margin"
/figure
Real-time analysis of corporate digital services (blogs, e-mail server, chat, Facebook, etc.) and correlation with concepts (Trump, economy, baseball, ...), e.g. measuring popularity trends of concepts within the company
Automatic text anonymizers, e.g. automatically taking a trial sentence and replacing identifying private data with a string of asterisks
Automatic text summarizers
Techniques to identify the main concepts of an article or document
Network Security
figure class="margin"
/figure
Using and following the secure tools developed by the OpenBSD team, such as:
HTTP serving using secure httpd instead of insecure Nginx and Apache
Load balancing using secure relayd
LibreSSL replacing OpenSSL
The OpenSSH project
The secure OpenSMTPD mail server
The secure PF tool to filter TCP/IP packets
figure class="small margin"
img src="/img/network-security.jpg" alt="Specifications for Web-based Data Exchange Image" /
/figure
OSLC Training
{{ titleheadline title="2-Day Workshop - Data integration using the next Web (Linked Data, OSLC)" }}
This course provides an introduction to Linked Data and Open Services for Lifecycle Collaboration (OSLC) for developing the future generation of data integration solutions based on the next Web. The course covers both theoretical aspects and concrete techniques for implementing OSLC solutions, illustrated through multiple examples.
{{ titleheadline title="Course Goals" }}
Data integration scenarios and challenges
Web-based data integration
Open Services for Lifecycle Collaboration (OSLC)
OSLC Specifications
Standards for web-based data integration
Related technologies
OSLC adapter implementation
Apache Jena
OSLC Adapter tutorial
Conversion of data into RDF
Overview of existing open-source solutions
{{ titleheadline title="Who Should Attend" }}
This course benefits project managers, systems engineers, data management experts, standardization specialists, and software developers.
{{ titleheadline title="Instructor" }}
Dr. Axel Reichwein has developed several OSLC-based integration solutions since 2012, including OSLC adapters for MagicDraw SysML, Simulink, AMESim, and PTC Integrity, in addition to OSLC-based synchronization middleware, triple stores, SPARQL endpoints, and support for the OSLC Tracked Resource Set (TRS) protocol. Many of these solutions are available as open source on GitHub at https://github.com/ld4mbse. He organized a 2-day workshop on OSLC at the INCOSE International Workshop 2015. He co-authored an article on OSLC entitled Integration of MBSE Artifacts Using OSLC in the August 2015 issue of the INCOSE INSIGHT magazine. As Co-Chair of the OMG OSLC4MBSE working group, he investigated how OSLC can be used for data management in a broad engineering context. He received a PhD in Aerospace Engineering from the University of Stuttgart and performed postdoctoral research at the Georgia Institute of Technology in system architecture modeling and multidisciplinary data integration.
{{ titleheadline title="Workshop Outline" }}
Day 1
Part 1 - Data integration scenarios and challenges
Data integration
Data integration scenarios and challenges
Data integration vs. data interoperability
Potential future data integration solution features
Part 2 - Web-based data integration
Web history
Web standards
Difference between Linked Data and Semantic Web
Weird acronyms: REST, RESTful APIs, HATEOAS, Hypermedia APIs
URL vs. URI vs. IRI
Web service vs. Web resource
Linked Data vs. Linked Data Platform
REST API standards vs. REST description languages
Role of OSLC
Part 3 - Open Services for Lifecycle Collaboration (OSLC)
Overview of OSLC Core specification
Common OSLC misconceptions
Overview of existing OSLC solutions
Part 4 - OSLC Domain-specific Specifications
Basics of Resource Description Framework (RDF)
RDF syntaxes
Role of RDF vocabularies
RDF constraints defined through OSLC resource shapes
OSLC domain-specific specifications
OSLC Tracked Resource Set specification
Part 5 - Standards for Web-based Data Integration
Motivation for new generation of web-compatible standards
Open world vs closed world modeling paradigms and impact on standards
Converting existing standards into web-compatible standards
Overview of communities defining web-compatible standards
Day 2
Part 6 - Related Technologies
Graph databases
Graph query languages
Future Linked Data Applications
Scalable and real-time Linked Data integration solutions
Configuration management
Part 7 - OSLC Adapter Introduction
How to install and run OSLC Bugzilla adapter
Overview of OSLC adapter architecture
Tracked Resource Set
Part 8 - Apache Jena
Creating/Writing/Reading RDF
Adding/Deleting RDF into/from RDF store
Using SPARQL HTTP endpoint
Part 9 - OSLC Adapter Tutorial
Web application architecture
Java servlets to create/read/delete OSLC resources
HTML and Java Server Pages (JSP)
Caching with HTTP ETags
Part 10 - Conversion of data into RDF
OSLC4J annotations
Data model in Ecore
Eclipse Modeling Framework
Generation of OSLC-annotated Java classes + Generation of OSLC specifications
OSLC adapter generators
Part 11 - Overview of open-source OSLC solutions
Solutions at Eclipse Lyo
Solutions at GitHub/ld4mbse
Solutions at GitHub/koneksys
{{ titleheadline title="OSLC Adapter Tutorial Code" }}
The OSLC adapter tutorial code will be shared with developers through a local Git repository packaged as a zip file. Each new OSLC adapter concept is introduced through a specific Git commit. By using Git, developers can go back and forth in the history of the development of an OSLC adapter, or choose to start from scratch.
Open Services for Lifecycle Collaboration
{{ titleheadline title="Introduction" }}
Open Services for Lifecycle Collaboration (OSLC) consists of a Hypermedia API standard and RDF-based specifications for data interoperability.
figure class="margin"
img src="/img/CustomvsDiscoverable.svg" alt="A Standardized Web API with OSLC" /
/figure
{{ titleheadline title="Hypermedia API" }}
Hypermedia is experienced every day by humans accessing HTML-based web pages on the Web. The idea of hypermedia is to allow clients to discover additional content on the Web through links embedded within a Web resource such as a Web page. By hopping from page to page, humans can discover the Web without having to know in advance the URLs of every page they want to visit.
The same principle of hypermedia can also be applied to REST APIs. Clients accessing machine-readable content from REST APIs would also like to discover all available resources from a REST API without needing to know in advance all URLs of Web services and resources made available by that REST API.
For example, a client only needs to know the URL of the API entry point resource to discover all other resources through links embedded in the first top-level resource. The links can be “passive” links referring to other resources or “active” links allowing the client to change the state of resources.
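A minimal sketch of this kind of discovery, using an assumed OSLC catalog URL: the client fetches the entry point as RDF and follows the oslc:serviceProvider links it finds there, instead of hard-coding any further URLs.

```python
# Hypermedia-style discovery from a single entry-point URL (placeholder).
# oslc:serviceProvider is the OSLC Core property linking a catalog to its
# service providers.
import requests
from rdflib import Graph, Namespace

OSLC = Namespace("http://open-services.net/ns/core#")
entry_point = "https://tools.example.com/oslc/catalog"   # hypothetical catalog URL

g = Graph()
g.parse(data=requests.get(entry_point, headers={"Accept": "text/turtle"}).text,
        format="turtle")

# Follow the links embedded in the entry-point resource.
for _catalog, _prop, provider in g.triples((None, OSLC.serviceProvider, None)):
    print("discovered service provider:", provider)
```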
{{ titleheadline title="Hypermedia API Standards" }}
Several Hypermedia API standards exist:
OSLC
W3C Linked Data Platform
W3C Hydra
smartAPI
OData
Hypermedia API standards should not be confused with REST API description languages such as the OpenAPI Specification, RAML, RSDL, and API Blueprint.
{{ titleheadline title="OSLC Core Specification" }}
The OSLC Core Specification is a Hypermedia API standard which has been mostly adopted within the software and systems engineering domains.
figure class="small"
img src="/img/oslc-core-overview.png" alt="OSLC Core concepts and relationships" /
figcaptionOSLC Core concepts and relationships/figcaption
/figure
The OSLC Core specification also covers the following concepts:
Delegated dialogs for improved UI
Response Information (such as for resource paging, or for errors)
Selective Property Values
Reuse of existing domain-specific specifications (e.g. Dublin Core)
{{ titleheadline title="OSLC Domain-Specific Specifications" }}
REST APIs conforming to OSLC represent data in RDF, a machine-readable format.
OSLC domain-specific specifications define the equivalent of schemas in RDF for enabling data interoperability. OSLC domain-specific specifications consist of RDF vocabularies and OSLC resource shapes.
RDF vocabularies are used to describe standardized resource types and properties. OSLC resource shapes are used to define constraints such as multiplicity constraints on properties of specific resource types.
Current OSLC domain-specific specifications cover mostly software and systems engineering concepts. For example, there is the OSLC Requirements Management Specification.
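To give a rough sense of how a vocabulary and a resource shape translate into code, here is a sketch using Eclipse Lyo's OSLC4J annotations (the same annotations listed in the tutorial outline above) for a Requirement resource with a single dcterms:title property. The annotations shown reflect typical OSLC4J usage and are not a complete mapping of the Requirements Management specification.

```java
// Sketch of an OSLC4J-annotated resource class. The property set is
// deliberately minimal; consult the OSLC Requirements Management
// specification for the full vocabulary and shape constraints.
import org.eclipse.lyo.oslc4j.core.annotation.OslcName;
import org.eclipse.lyo.oslc4j.core.annotation.OslcNamespace;
import org.eclipse.lyo.oslc4j.core.annotation.OslcOccurs;
import org.eclipse.lyo.oslc4j.core.annotation.OslcPropertyDefinition;
import org.eclipse.lyo.oslc4j.core.annotation.OslcResourceShape;
import org.eclipse.lyo.oslc4j.core.model.AbstractResource;
import org.eclipse.lyo.oslc4j.core.model.Occurs;

// The RDF vocabulary contributes the resource type and property URIs ...
@OslcNamespace("http://open-services.net/ns/rm#")
@OslcName("Requirement")
// ... and the resource shape adds constraints on how those properties are used.
@OslcResourceShape(title = "Requirement Resource Shape",
                   describes = "http://open-services.net/ns/rm#Requirement")
public class Requirement extends AbstractResource {

    private String title;

    // dcterms:title is reused from the Dublin Core vocabulary; the shape
    // constrains it to exactly one value per Requirement.
    @OslcName("title")
    @OslcPropertyDefinition("http://purl.org/dc/terms/title")
    @OslcOccurs(Occurs.ExactlyOne)
    public String getTitle() {
        return title;
    }

    public void setTitle(String title) {
        this.title = title;
    }
}
```

From such annotated classes, OSLC4J can serialize instances as RDF and produce the corresponding resource shape documents, which is the mechanism the adapter tutorial builds on.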
Consulting
We provide consulting services focusing on data integration with open standards for data traceability, data interoperability, and system modeling.
Data Analysis and Visualization
What is Data Analysis and why visualize data?
Data analysis is the entire process beginning with raw data and ending with useful information.
Typically, this involves operations on data such as pattern matching and mathematical computations. For example, it can be used to check whether business goals are met or whether data matches expectations. Data analysis is closely linked to data visualization because results need to be visualized in order to be easily shared with other stakeholders.
How can you benefit from it?
While proprietary data analysis and data visualization solutions sometimes provide unique benefits, they are not necessary for every scenario. In many situations, you can use open standards and existing open source solutions to build cost-effective and powerful data analysis and data visualization solutions, and often get the most value for your money.
Why Koneksys?
We specialize in creating data analysis and data visualization solutions using the latest standards and open source technologies. We focus on Linked Data standards from the World Wide Web Consortium (W3C), which are aligned with the latest developments in the web community. We have developed several data analysis and data visualization solutions for large engineering organizations. Such solutions for engineers are complex because both the data and the processes are complex. We can apply the same proven analysis and visualization solutions to other application areas such as healthcare or the financial industry.
Finally, we not only use standards, we contribute to their development. We not only use open source, we release many of our data integration solutions as open source. We are not tied to any solution vendor. Overall, we are your best partner for data analysis and data visualization solutions based on open standards and open technologies.
Data Integration
What is Data Integration?
Data integration is a succession of technical processes in which your data is collected, merged, and finally delivered as useful insights into your business. The main questions we answer are: In what format does the data need to be? In which database will the merged data be stored? How can the merged data be accessed and queried? Is the solution secure, and does it meet performance expectations?
How can you benefit from it?
We live in a world with increasing data, with more and more databases, and more and more services publishing data. Data integration allows users to gain new insights from data originally saved in different sources. It helps users view data from a holistic perspective and make better decisions. It’s only through smart and efficient integration that we can fully benefit from this data explosion.
While proprietary data integration solutions sometimes provide unique benefits, they are not necessary for all your data integration scenarios. In many scenarios, you can use open standards and existing open source solutions to realize cost-effective and powerful data integration solutions. Get the most for your buck using open data standards and open source technologies.
Why Koneksys?
We specialize in creating data integration solutions using the latest standards and open source technologies. We focus on Linked Data standards from the World Wide Web Consortium (W3C), which are aligned with the latest developments in the web community.
We have developed several integration solutions for large engineering organizations in order to support data traceability and data consistency across different engineering disciplines and engineering models. The better the data is integrated, the easier it is to perform impact analysis, to conduct change management, to achieve product certification, and to support cross-disciplinary collaboration.
Linked Data Platform
What is a Linked Data Platform?
The Linked Data Platform (LDP) is a W3C standard defining a set of rules for HTTP operations on web resources in order to provide an architecture for read-write Linked Data on the web. If data is made available on the Web in compliance with the W3C Linked Data Platform standard, clients can retrieve and update it in a standardized way. OSLC (Open Services for Lifecycle Collaboration) adapters, for example, support Linked Data Platform concepts.
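As a minimal sketch of what read-write Linked Data looks like in practice, the example below uses Java's built-in HTTP client to read an LDP container and then create a new member resource in it. The container URL and the Turtle payload are illustrative assumptions; an actual LDP server defines its own containers and vocabularies.

```java
// Minimal read-write LDP interaction sketch using java.net.http (Java 11+).
// The container URL and Turtle payload are assumptions for illustration.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LdpClientSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String container = "http://example.org/ldp/requirements/"; // assumed LDP container

        // Read: GET the container, asking for an RDF serialization
        HttpRequest get = HttpRequest.newBuilder(URI.create(container))
                .header("Accept", "text/turtle")
                .GET()
                .build();
        System.out.println(client.send(get, HttpResponse.BodyHandlers.ofString()).body());

        // Write: POST a new member resource to the container as Turtle
        String turtle = "@prefix dcterms: <http://purl.org/dc/terms/> .\n"
                      + "<> dcterms:title \"New requirement\" .";
        HttpRequest post = HttpRequest.newBuilder(URI.create(container))
                .header("Content-Type", "text/turtle")
                .POST(HttpRequest.BodyPublishers.ofString(turtle))
                .build();
        HttpResponse<String> created = client.send(post, HttpResponse.BodyHandlers.ofString());
        System.out.println(created.statusCode() + " "
                + created.headers().firstValue("Location").orElse("(no Location header)"));
    }
}
```

A conforming LDP server typically answers the POST with 201 Created and a Location header pointing to the new resource, which the client can then GET, PUT, or DELETE in the same standardized way.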
How can you benefit from it?
Most Web APIs do not currently share a common structure. As a result, client implementations differ from one Web API to another, creating additional implementation effort for organizations that have to develop and maintain many clients talking to many different Web APIs. Fortunately, Web APIs publishing data according to the W3C Linked Data Platform standard share a common structure, so client implementations for Linked Data Platform (LDP) Web APIs can largely be reused.
Why Koneksys?
Koneksys has extensive experience in implementing OSLC adapters complying with Linked Data Platform concepts. Koneksys is a big supporter of W3C standards such as the W3C Linked Data Platform standard. We are going beyond the development of Linked Data Platform (LDP) Web APIs and are internally developing a web application called Koneksys Linked Data (KLD) to manage data from different LDP Web APIs, such as different OSLC adapters. Koneksys Linked Data (KLD) includes capabilities such as full-text search, SPARQL queries, and data viewers and editors for data originating from the LDP Web APIs selected by the user. Future features will include version management and optimized query capabilities for data originating from LDP Web APIs. Don’t hesitate to subscribe to our newsletter to stay updated!
Open Source Solutions
What is an Open Source Solution?
An open source solution (or open source software) is computer software whose source code is made available under a license in which the copyright holder grants anyone the rights to study, change, and distribute the software for any purpose. It can be developed in a collaborative, public manner. Open source development drawing on multiple independent sources generates a more diverse range of design perspectives than any one company is capable of developing and sustaining long term. A 2008 report by the Standish Group states that adoption of open source software models has resulted in savings of about $60 billion per year for consumers.
How can you benefit from it?
Open source software can satisfy even strict security policies; this is why important software running the World Wide Web and some of the largest online financial transactions is based on open source. Furthermore, you can perform software maintenance and updates at a lower cost. You do not depend on a single exclusive software vendor for maintenance and updates; instead, you can rely on a larger open community of experts who can provide similar services at a lower rate.
Why Koneksys?
Koneksys has developed several data integration, data analysis, and data visualization solutions released as open source. We are proud of what we do, and we are happy to show it. By looking at the open source solutions created by Koneksys, you can easily verify that we have the expertise that is needed for your projects. Koneksys is also interested in expanding the community around open source solutions by connecting persons with similar interests.
List of open source OSLC and Linked Data integration solutions developed by Koneksys
OSLC Adapter for Simulink
OSLC Adapter for MagicDraw SysML
OSLC Adapter for AMESim
OSLC Adapter for PTC Integrity
OSLC Adapter for Jena TDB
Support
Reuse, extend, and maintain open-source data integration solutions with the help of Koneksys’ experts
Training
We provide training services focusing on data integration with open standards for data traceability, data interoperability, and system modeling: learn from our experts!