Head to the arcade in your Linux terminal with this Pac-Man clone

1 day 1 hour ago

Welcome back to another day of the Linux command-line toys advent calendar. If this is your first visit to the series, you might be asking yourself what command-line toys are all about. Basically, they're games and simple diversions that help you have fun at the terminal.

Some are new, and some are old classics. We hope you enjoy.


Jason Baker

Red Route: A framework for progressively decoupled Drupal: Introducing the SPALP module

1 day 2 hours ago

This article was originally posted on the Capgemini Engineering blog.

A lot of people have been jumping on the headless CMS bandwagon over the past few years, but I’ve never been entirely convinced. Maybe it’s partly because I don’t want to give up on the sunk costs of what I’ve learned about Drupal theming, and partly because I'm proud to be a boring developer, but I haven’t been fully sold on the benefits of decoupling.

On our current project, we’ve continued to take an approach that Dries Buytaert has described as "progressively decoupled Drupal". Drupal handles routing, navigation, access control, and page rendering, while rich interactive functionality is provided by a JavaScript application sitting on top of the Drupal page. In the past, we’d taken a similar approach, with AngularJS applications on top of Drupal 6 or 7, getting their configuration from Drupal.settings, and for this project we decided to use React on top of Drupal 8.

There are a lot of advantages to this approach, in my view. There are several discrete interactive applications on the site, but the bulk of the site is static content, so it definitely makes sense for that content to be rendered by the server rather than constructed in the browser. This brings a lot of value in terms of accessibility, search engine optimisation, and performance.

A decoupled system is almost inevitably more complex, with more potential points of failure.

The application can be developed independently of the CMS, so specialist JavaScript developers can work without needing to worry about having a local Drupal build process.

If at some later date, the client decides to move away from Drupal, or at the point where we upgrade to Drupal 9, the applications aren’t so tightly coupled, so the effort of moving them should be smaller.

Having made the decision to use this architecture, we wanted a consistent framework for managing application configuration, to make sure we wouldn't need to keep reinventing the wheel for every application, and to keep things easy for the content team to manage.

The client’s content team want to be able to control all of the text within the application (across multiple languages), and be able to preview changes before putting them live.

There didn’t seem to be an established approach for this, so we’ve built a module for it.

As we've previously mentioned, the team at Capgemini are strongly committed to supporting the open source communities whose work we depend on, and we try to contribute back whenever we can, whether that’s patches to fix bugs and add new features, or creating new modules to fill gaps where nothing appropriate already exists. For instance, a recent client requirement to promote their native applications led us to build the App Banners module.

Aiming to make our modules open source wherever possible helps us to think in systems, considering the specific requirements of this client as an example of a range of other potential use cases. This helps to future-proof our code, because it’s more likely that evolving requirements can be met by a configuration change, rather than needing a code change.

So, guided by these principles, I'm very pleased to announce the Single Page Application Landing Page module for Drupal 8, or to use the terrible acronym that it has unfortunately but inevitably acquired, SPALP.

On its own, the module doesn’t do much other than provide an App Landing Page content type. Each application needs its own module to declare a dependency on SPALP, define a library, and include its configuration as JSON (with associated schema). When a module which does that is installed, SPALP takes care of creating a landing page node for it, and importing the initial configuration onto the node. When that node is viewed, SPALP adds the library, and a link to an endpoint serving the JSON configuration.
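
As a rough sketch of what that looks like in practice, an application module might ship files along the lines of the following. The module name (my_app), the file names, and the location SPALP expects for the JSON configuration are assumptions for illustration only; the example module in the SPALP repository shows the real conventions.

    # my_app.info.yml: declare the dependency on SPALP.
    name: My App
    type: module
    core: 8.x
    dependencies:
      - spalp:spalp

    # my_app.libraries.yml: define the JavaScript library for the application.
    my_app:
      js:
        js/my_app.bundle.js: {}

    # Initial JSON configuration (file name and path assumed), imported onto the
    # App Landing Page node when the module is installed:
    {
      "appConfig": {
        "apiEndpoint": "/api/v1/examples",
        "labels": {
          "submitButton": "Send"
        }
      }
    }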

Deciding how to store the app configuration and make all the text editable was one of the main questions, and we ended up answering it in a slightly "un-Drupally" way.

On our old Drupal 6 projects, the text was stored in a separate 'Messages' node type. This was a bit unwieldy, and it was always quite tricky to figure out which node was the right one to edit.

For our Drupal 7 projects, we used the translation interface, even on a monolingual site, where we translated from English to British English. It seemed like a great idea to the development team, but the content editors always found it unintuitive, struggling to find the right string to edit, especially for common strings like button labels. It also didn't allow the content team to preview changes to the app text.

We wanted to maintain everything related to the application in one place, in order to keep things simpler for developers and content editors. This, along with the need to manage revisions of the app configuration, led us down the route of using a single node to manage each application.

This approach makes it easy to integrate the applications with any of the good stuff that Drupal provides, whether that’s managing meta tags, translation, revisions, or something else that we haven't thought of.

The SPALP module also provides event dispatchers to allow configuration to be altered. For instance, we set different API endpoints in test environments.
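
A sketch of what such a subscriber could look like is below. It follows the standard Drupal 8 event subscriber pattern, but the event name and the getConfig()/setConfig() methods are placeholders rather than SPALP's actual API, so check the module's source for the real event it dispatches.

    <?php

    namespace Drupal\my_app\EventSubscriber;

    use Symfony\Component\EventDispatcher\EventSubscriberInterface;

    /**
     * Overrides application configuration per environment.
     *
     * Hypothetical example: the event name and methods below are placeholders,
     * not SPALP's real API.
     */
    class AppConfigSubscriber implements EventSubscriberInterface {

      public static function getSubscribedEvents() {
        // Placeholder for whatever event SPALP dispatches when it builds the
        // JSON configuration response.
        return ['my_app.config_alter' => 'alterConfig'];
      }

      public function alterConfig($event) {
        $config = $event->getConfig();
        // Point the app at the test API endpoint in this environment.
        $config['appConfig']['apiEndpoint'] = '/api/test/examples';
        $event->setConfig($config);
      }

    }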

Another nice feature is that in the node edit form, the JSON object is converted into a usable set of form fields using the JSON forms library. This generic approach means that we don’t need to spend time copying boilerplate Form API code to build configuration forms when we build a new application - instead the developers working on the JavaScript code write their configuration as JSON in a way that makes sense for their application, and generate a schema from that. When new configuration items need to be added, we only need to update the JSON and the schema.
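
As an illustration of that pairing, a fragment of application configuration and the JSON Schema that would drive the generated form fields might look like this (the field names are invented for the example). A configuration fragment:

    {
      "labels": {
        "submitButton": "Send",
        "errorMessage": "Something went wrong"
      }
    }

And the matching schema, where each property becomes an editable field on the node edit form:

    {
      "type": "object",
      "properties": {
        "labels": {
          "type": "object",
          "properties": {
            "submitButton": { "type": "string", "title": "Submit button label" },
            "errorMessage": { "type": "string", "title": "Error message" }
          }
        }
      }
    }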

Each application only needs a very simple Drupal module to define its library, so we’re able to build the React code independently, and bring it into Drupal as a Composer dependency.
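
For instance, assuming the React application lives in its own Git repository and exposes a Composer package (the names below are hypothetical), the Drupal project's composer.json could pull it in with something like:

    {
      "repositories": [
        {
          "type": "vcs",
          "url": "https://github.com/example/my-app"
        }
      ],
      "require": {
        "example/my-app": "^1.0"
      }
    }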

The repository includes a small example module to show how to implement these patterns, and hopefully other teams will be able to use it on other projects.

As with any project, it's not complete. So far we've only built one application following this approach, and it seems to be working pretty well. Among the items in the issue queue is better integration with the configuration management system, so that we can make it clear if a setting has been overridden for the current environment.

I hope that this module will be useful for other teams - if you're building JavaScript applications that work with Drupal, please try it out, and if you use it on your project, I'd love to hear about it. Also, if you spot any problems, or have any ideas for improvements, please get in touch via the issue queue.

Tags: Capgemini, development, Drupal, JavaScript, open source

OpenSense Labs: SCORM and E-Learning. Can Drupal Fit In?

1 day 21 hours ago
Vasundhra | Fri, 12/14/2018 - 17:32

Referred to as the de facto standard for e-learning, the Shareable Content Object Reference Model (SCORM) was sponsored by the US Department of Defense to bring uniformity to the standards for procuring both training content and Learning Management Systems (LMSs).

Long gone, but not forgotten, are the days when learning was limited to books and classrooms. With the development of technology, virtual learning has become an approachable and convenient option.

Can Drupal, which is a widely popular CMS for education websites, conform to SCORM standards? How does it ensure that it remains SCORM compliant? 


In Detail: What is SCORM?

SCORM is a set of standard guidelines and specifications that tells programmers how to create LMSs and training content so that they can be shared across systems.

The goal behind SCORM was to create standard units of training and educational material that could be shared and reused across systems.


Shareable Content Object refers to units of online training material that can be shared and reused across systems and contexts.

Reference Model refers to the existing standards in the education industry, telling developers how to use them together properly.

SCORM packages are typically used by e-learning professionals, training managers, and instructional designers, who work with authoring tools to design and produce the content.

Content used in courses and LMSs is exported as a SCORM package (a .zip folder) so that it can be uploaded and delivered to learners seamlessly and smoothly.

The Evolution of SCORM

Since SCORM wasn’t built as a standard from the ground up, but is primarily a reference to existing ones, the goal was to create an interoperable system that works well with other systems.

To date, there have been three released versions of SCORM, each built on top of the previous one and solving the problems of its predecessor.

  • SCORM 1.0

SCORM 1.0 was merely a draft outline of the framework. It did not include any fully implementable specifications, but rather contained a preview of the work that was yet to come.

Even so, it included the core elements that would become the foundation of SCORM: it specified how content should be packaged, how it should communicate with systems, and how it should be described.

  • SCORM 1.1

SCORM 1.1 was the first implementable version of SCORM. It marked the end of the trial implementation phase and the beginning of the application phase for ADL (Advanced Distributed Learning, the US Department of Defense initiative behind SCORM).

  • SCORM 1.2

SCORM 1.2 solved many of the problems that came with version 1.1. It provided robust, implementable specifications and presented end users with drastic cost savings.

It was, and still remains, one of the most widely used versions.

  • SCORM 2004 (1st - 4th edition)

The 2004 1st edition allowed content vendors to create navigation rules between SCOs. The 2nd edition covered the various shortcomings of the 1st, in line with the ADL initiative's focus on developing and assessing distributed learning prototypes and enabling more effective, efficient, and affordable learner-centric solutions.

The 3rd edition removed any ambiguity, improving the sequencing specifications for greater interoperability.

The final, 4th edition focused on disambiguation and on adding new sequencing specifications, which widened the options available to content authors and made creating sequenced content even simpler.


Why Should You Use SCORM?

Now that we have an idea of what SCORM is and how it attempts to reduce chaos across the industry, let’s look at the benefits it brings.

Here are some of the main reasons to use SCORM.

  • It is a pro-consumer initiative. Online courses can be used on any compliant LMS: as long as you have the zip folder, you can upload a course to a different LMS.
  • High-quality LMSs and authoring tools are SCORM compliant, so they can be part of a broad ecosystem of interoperability and reliability.
  • The introduction and evolution of SCORM have brought about a great reduction in the overall cost of delivering training, because there is no additional cost for integrating any type of content.
  • SCORM helps standardize e-learning specifications. It provides a set of technical specifications that gives developers a standard blueprint to work with.

How does SCORM Work?

Besides guiding programmers, SCORM governs two main things to ensure everything works together: packaging content and exchanging data at runtime.

  • Packaging content, or the content aggregation model (CAM), defines how a piece of content should be presented in a physical sense. It is what an LMS needs in order to import, export, and launch content without any human intervention.
  • Runtime communication, or data exchange, defines how the content works with the LMS while it is actually being played. This is the part that covers the delivery and tracking of the content, and it includes things like “request the learner’s name” or “tell the LMS that the learner scored 95% in a test” (see the sketch below).
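
For example, with the SCORM 1.2 runtime API, which the LMS exposes to the content as a JavaScript object named API, those two interactions look roughly like this (a minimal sketch; real content would add error handling):

    // Find the SCORM 1.2 API object provided by the LMS (it may live in a parent frame).
    function findAPI(win) {
      while (win && !win.API && win.parent && win.parent !== win) {
        win = win.parent;
      }
      return (win && win.API) ? win.API : null;
    }

    var API = findAPI(window);

    API.LMSInitialize("");                                   // start the session
    var name = API.LMSGetValue("cmi.core.student_name");     // "request the learner's name"
    API.LMSSetValue("cmi.core.score.raw", "95");             // "tell the LMS the learner scored 95%"
    API.LMSCommit("");                                       // persist the data
    API.LMSFinish("");                                       // end the session
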
“SCORM recommends content to be delivered in a self-contained directory or a ZIP file.”
Working of SCORM Packages

SCORM recommends that content be delivered in a self-contained directory or a ZIP file. These files, containing content defined by the SCORM standards, are called Package Interchange Files (PIFs), or in other words, SCORM packages.

A package contains all of the files needed to deliver the content via the SCORM runtime environment.

The course manifest file is considered the heart of the SCORM content packaging system. The manifest is an XML file that describes the content and how it is organized.
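
To give a sense of what the manifest looks like, here is a stripped-down imsmanifest.xml for a single-SCO SCORM 1.2 package; the identifiers, titles, and file names are examples only:

    <?xml version="1.0" encoding="UTF-8"?>
    <manifest identifier="com.example.course" version="1.0"
              xmlns="http://www.imsproject.org/xsd/imscp_rootv1p1p2"
              xmlns:adlcp="http://www.adlnet.org/xsd/adlcp_rootv1p2">
      <metadata>
        <schema>ADL SCORM</schema>
        <schemaversion>1.2</schemaversion>
      </metadata>
      <organizations default="org_1">
        <organization identifier="org_1">
          <title>Example Course</title>
          <item identifier="item_1" identifierref="res_1">
            <title>Lesson 1</title>
          </item>
        </organization>
      </organizations>
      <resources>
        <resource identifier="res_1" type="webcontent" adlcp:scormtype="sco" href="index.html">
          <file href="index.html"/>
          <file href="lesson1.js"/>
        </resource>
      </resources>
    </manifest>

The organizations and resources elements in this sketch correspond to the pieces described below.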

Some of the pieces involved in the packaging are:

  • Resources 

Resources are the parts that are bundled together to make up a single course. There are two types of resources that contribute to a course.

The first is an asset: a collection of one or more files that make up a logical unit presented to the user, typically the instructional or static parts of the content. The other is a SCO (Sharable Content Object): a unit of instruction, composed of one or more files, that communicates with the LMS.

A resource should list every file it needs in order to function properly, so that the package can be ported to a new environment and work in the same way.

 

  • Organizations

Organizations are a logical grouping of resources into a hierarchical arrangement. This is what is delivered to a particular learner when an item is selected.

  • Metadata 

Metadata is used to describe the elements of a content package in its manifest file. It is important because it facilitates the discovery of learning resources across content packages or in a repository.

When a learning resource is intended to be reusable, it is best practice to describe it with metadata.

For describing learning content, the Learning Object Metadata (LOM) standard provides many predefined fields.
  • Sequencing

Sequencing is responsible for determining what happens next when a learner exits an SCO. With navigational control, it orchestrates the flow and status of the course as a whole. 

However, it doesn’t affect how SCOs operate and navigate internally; that is defined by the content developer.

Drupal With SCORM 

Drupal is best at managing digital content, but the task of planning, implementing, and assessing a specific learning process is best done by an LMS.

How, then, can Drupal become a platform for an organization to deliver effective training, manage learners and their individual progress, and record results?

Since Drupal is not an LMS on its own, its distributions and modules help it fill that role. When it comes to SCORM compliance, Drupal has the Opigno LMS distribution.

Opigno LMS is a Drupal distribution that integrates H5P technology (an open source, JavaScript-based content collaboration framework), which enables you to create rich, interactive training content. It allows you to maintain training paths that are organized into courses and lessons.

This distribution includes the latest version of the Opigno core, which offers effective and innovative online training tools.

Opigno LMS is fully compliant with SCORM (1.2 and 2004 v3) and offers a powerful editor for content management, in particular for creating course material. Courses can be grouped into classes to provide easy, manageable training paths. It should also be noted that this distribution is the quickest way to get a functional e-learning platform out of the box, with users, courses, certificates, and so on.

Built on this distribution, the Opigno SCORM module implements the SCORM feature in Opigno, which allows you to load and play SCORM packages within Opigno training, and it is also responsible for handling and managing training paths that are organized into courses and lessons.

Opigno LMS also includes an app store that lets you install the latest features easily, without having to upgrade the current installation.

In terms of the requirements and expectations of learners, Opigno LMS can be summarized by the following characteristics:

  1. Scalable enough to manage the demands of a dynamic and changing environment
  2. Safe and easy to update
  3. Supports further development of customized functionality that integrates with the core solution in a modular way
  4. Open, letting each client remain free and independent
  5. And, most importantly, easy to integrate with other enterprise systems

The H5P JavaScript framework makes it easy to create, share, and reuse HTML5 content and applications, allowing users to build richer content. With H5P, authors can create and edit videos, presentations, games, advertisements, and more. Integrating the H5P framework with SCORM is essential for creating an e-learning platform.


The H5P SCORM/xAPI module allows you to upload and view SCORM and xAPI packages. It uses two H5P libraries (H5P libraries are used to create and share rich content and applications):

  1. The H5P SCORM/xAPI library, to view SCORM packages.
  2. The H5PEditor SCORM library, to upload and validate SCORM packages.

You can then create a new content type by uploading a SCORM package through the H5P editor.

In a nutshell

Different people adopt SCORM for different reasons. You and your team are the only ones who can decide whether sticking to SCORM is worthwhile.

Which platform is best for you depends on the nature of your requirements and your planned course of action. At OpenSense Labs, we have been providing solutions tailored to our customers' needs. Contact us at hello@opensenselabs.com to make the right decision on the choice of platform.


The Linux terminal is no one-trick pony

2 days 1 hour ago

Welcome to another day of the Linux command-line toys advent calendar. If this is your first visit to the series, you might be asking yourself what a command-line toy even is. We’re figuring that out as we go, but generally, it could be a game, or any simple diversion that helps you have fun at the terminal.

Some of you will have seen various selections from our calendar before, but we hope there’s at least one new thing for everyone.


Jason Baker

Protecting the world’s oceans with open data science

2 days 1 hour ago

For environmental scientists, researching a single ecosystem or organism can be a daunting task. The amount of data and literature to comb through (or create) is often overwhelming.

So how, then, can environmental scientists approach studying the health of the world’s oceans? What ocean health means is a big question in itself—oceans span millions of square miles, are home to countless species, and border hundreds of countries and territories, each of which has its own unique marine policies and practices.


juliesquid


Wim Leers: State of JSON:API (December 2018)

2 days 20 hours ago

Gabe, Mateu and I just released the third RC of JSON:API 2, so time for an update! The last update is from three weeks ago.

What happened since then? In a nutshell:

RC3

Curious about RC3? RC2 → RC3 has five key changes:

  1. ndobromirov is all over the issue queue to fix performance issues: he fixed a critical performance regression in 2.x vs 1.x that is only noticeable when requesting responses with hundreds of resources (entities); he also fixed another performance problem that manifests itself only in those circumstances, but also exists in 1.x.
  2. One major bug was reported by dagmar: the ?filter syntax that we made less confusing in RC2 was a big step forward, but we had missed one particular edge case! (An example of the filter syntax follows this list.)
  3. A pretty obscure broken edge case was discovered, but probably fairly common for those creating custom entity types: optional entity reference base fields that are empty made the JSON:API module stumble. Turns out optional entity reference fields get different default values depending on whether they’re base fields or configured fields! Fortunately, three people gave valuable information that led to finding this root cause and the solution! Thanks, olexyy, keesee & caseylau!
  4. A minor bug was fixed that only occurs when installing JSON:API Extras and configuring it in a certain way.
  5. Version 1.1 RC1 of the JSON:API spec was published; it includes two clarifications to the existing spec. We were already doing one of them correctly (test coverage was added to guarantee it), and we now comply with the other one too. Everything else in version 1.1 of the spec is additive; this is the only thing that could be disruptive, so we chose to do it ASAP.
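
For reference, the filter syntax in question supports a shorthand form and a full condition form; the title-filter label below is an arbitrary name chosen by the client, and the full-form request is split across lines for readability:

    # Shorthand: articles whose title is exactly "Foo".
    GET /jsonapi/node/article?filter[title]=Foo

    # Full form: articles whose title contains "Drupal".
    GET /jsonapi/node/article
        ?filter[title-filter][condition][path]=title
        &filter[title-filter][condition][operator]=CONTAINS
        &filter[title-filter][condition][value]=Drupal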

So … now is the time to update to 2.0-RC3. We’d love the next release of JSON:API to be the final 2.0 release!

P.S.: if you want fixes to land quickly, follow dagmar’s example:

If you don't know how to fix a bug of a #drupal module, providing a failing test usually is really helpful to guide project maintainers. Thanks! @GabeSullice and @wimleers for fixing my bug report https://t.co/bEkkjSrE8U

— Mariano D'Agostino (@cuencodigital) December 11, 2018
  1. Note that usage statistics on drupal.org are an underestimation! Any site can opt out from reporting back, and composer-based installs don’t report back by default. ↩︎

  2. Since we’re in the RC phase, we’re limiting ourselves to only critical issues. ↩︎

  3. This is the first officially proposed JSON:API profile! ↩︎

Relax by the fire at your Linux terminal

3 days 1 hour ago

Welcome back. Here we are, just past the halfway mark at day 13 of our 24 days of Linux command-line toys. If this is your first visit to the series, see the link to the previous article at the bottom of this one, and take a look back to learn what it's all about. In short, our command-line toys are anything that's a fun diversion at the terminal.

Maybe some are familiar, and some aren't. Either way, we hope you have fun.


Jason Baker

One developer's road: Programming and mental illness

3 days 1 hour ago

In early 1997, my dad bought a desktop PC pre-installed with Microsoft Windows 98. An 11-year-old elementary school student at the time, I started learning the applications. Six months later, we got internet access using a dial-up modem, and I learned the basics of accessing the World Wide Web and discovered Netscape Navigator.


joel2001k