Red Route: A framework for progressively decoupled Drupal: Introducing the SPALP module

1 day 1 hour ago

This article was originally posted on the Capgemini Engineering blog

A lot of people have been jumping on the headless CMS bandwagon over the past few years, but I’ve never been entirely convinced. Maybe it’s partly because I don’t want to give up on the sunk costs of what I’ve learned about Drupal theming, and partly because I'm proud to be a boring developer, but I haven’t been fully sold on the benefits of decoupling.

On our current project, we’ve continued to take an approach that Dries Buytaert has described as "progressively decoupled Drupal". Drupal handles routing, navigation, access control, and page rendering, while rich interactive functionality is provided by a JavaScript application sitting on top of the Drupal page. In the past, we’d taken a similar approach, with AngularJS applications on top of Drupal 6 or 7, getting their configuration from Drupal.settings, and for this project we decided to use React on top of Drupal 8.

There are a lot of advantages to this approach, in my view. There are several discrete interactive applications on the site, but the bulk of the site is static content, so it definitely makes sense for that content to be rendered by the server rather than constructed in the browser. This brings a lot of value in terms of accessibility, search engine optimisation, and performance.

On the other hand, a decoupled system is almost inevitably more complex, with more potential points of failure.

Still, the application can be developed independently of the CMS, so specialist JavaScript developers can work without needing to worry about having a local Drupal build process.

If the client decides to move away from Drupal at some later date, or when we upgrade to Drupal 9, the applications aren’t so tightly coupled to it, so the effort of moving them should be smaller.

Having made the decision to use this architecture, we wanted a consistent framework for managing application configuration, to make sure we wouldn't need to keep reinventing the wheel for every application, and to keep things easy for the content team to manage.

The client’s content team want to be able to control all of the text within the application (across multiple languages), and be able to preview changes before putting them live.

There didn’t seem to be an established approach for this, so we’ve built a module for it.

As we've previously mentioned, the team at Capgemini are strongly committed to supporting the open source communities whose work we depend on, and we try to contribute back whenever we can, whether that’s patches to fix bugs and add new features, or creating new modules to fill gaps where nothing appropriate already exists. For instance, a recent client requirement to promote their native applications led us to build the App Banners module.

Aiming to make our modules open source wherever possible helps us to think in systems, considering the specific requirements of this client as an example of a range of other potential use cases. This helps to future-proof our code, because it’s more likely that evolving requirements can be met by a configuration change, rather than needing a code change.

So, guided by these principles, I'm very pleased to announce the Single Page Application Landing Page module for Drupal 8, or to use the terrible acronym that it has unfortunately but inevitably acquired, SPALP.

On its own, the module doesn’t do much other than provide an App Landing Page content type. Each application needs its own module to declare a dependency on SPALP, define a library, and include its configuration as JSON (with associated schema). When a module which does that is installed, SPALP takes care of creating a landing page node for it, and importing the initial configuration onto the node. When that node is viewed, SPALP adds the library, and a link to an endpoint serving the JSON configuration.
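As a sketch of that pattern (the module name, library name, and file paths here are hypothetical, not part of SPALP itself), the per-application Drupal module is mostly declarative metadata:

    # my_react_app.info.yml (a hypothetical application module)
    name: 'My React App'
    type: module
    core: 8.x
    dependencies:
      - spalp:spalp

    # my_react_app.libraries.yml - the library SPALP attaches when the landing page node is viewed
    my_react_app:
      js:
        js/dist/app.js: { minified: true }

The module also ships its initial configuration as a JSON file with an accompanying schema; see the SPALP documentation for where it expects those files to live.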

Deciding how to store the app configuration and make all the text editable was one of the main questions, and we ended up answering it in a slightly "un-Drupally" way.

On our old Drupal 6 projects, the text was stored in a separate 'Messages' node type. This was a bit unwieldy, and it was always quite tricky to figure out what was the right node to edit.

For our Drupal 7 projects, we used the translation interface, even on a monolingual site, where we translated from English to British English. It seemed like a great idea to the development team, but the content editors always found it unintuitive, struggling to find the right string to edit, especially for common strings like button labels. It also didn't allow the content team to preview changes to the app text.

We wanted to maintain everything related to the application in one place, in order to keep things simpler for developers and content editors. This, along with the need to manage revisions of the app configuration, led us down the route of using a single node to manage each application.

This approach makes it easy to integrate the applications with any of the good stuff that Drupal provides, whether that’s managing meta tags, translation, revisions, or something else that we haven't thought of.

The SPALP module also provides event dispatchers to allow configuration to be altered. For instance, we set different API endpoints in test environments.
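SPALP's real event and class names live in the module itself, so treat the following purely as an illustrative sketch: the event name, event class accessors, and environment check are assumptions, not SPALP's actual API.

    <?php

    namespace Drupal\my_react_app\EventSubscriber;

    use Symfony\Component\EventDispatcher\EventSubscriberInterface;

    /**
     * Overrides the app's API endpoint when running outside production.
     */
    class MyReactAppConfigSubscriber implements EventSubscriberInterface {

      public static function getSubscribedEvents() {
        // 'spalp.config_alter' is a placeholder; use the event SPALP actually dispatches.
        return ['spalp.config_alter' => 'onConfigAlter'];
      }

      public function onConfigAlter($event) {
        $config = $event->getConfig();
        if (getenv('APP_ENV') === 'test') {
          $config['api']['endpoint'] = 'https://api.test.example.com';
        }
        $event->setConfig($config);
      }

    }

The subscriber would also need to be registered as a tagged event_subscriber service in the module's services.yml, as with any Drupal 8 event subscriber.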

Another nice feature is that in the node edit form, the JSON object is converted into a usable set of form fields using the JSON forms library. This generic approach means that we don’t need to spend time copying boilerplate Form API code to build configuration forms when we build a new application - instead the developers working on the JavaScript code write their configuration as JSON in a way that makes sense for their application, and generate a schema from that. When new configuration items need to be added, we only need to update the JSON and the schema.
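For illustration only (the property names are invented for this example), an application's JSON configuration might look like this:

    {
      "appText": {
        "submitLabel": "Send application",
        "successMessage": "Thanks, we have received your details."
      },
      "api": {
        "endpoint": "https://api.example.com/v1/quotes"
      }
    }

with a schema describing each property, so that the JSON forms library can render labelled form fields for the content team:

    {
      "type": "object",
      "properties": {
        "appText": {
          "type": "object",
          "properties": {
            "submitLabel": { "type": "string", "title": "Submit button label" },
            "successMessage": { "type": "string", "title": "Success message" }
          }
        },
        "api": {
          "type": "object",
          "properties": {
            "endpoint": { "type": "string", "title": "API endpoint" }
          }
        }
      }
    }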

Each application only needs a very simple Drupal module to define its library, so we’re able to build the React code independently, and bring it into Drupal as a Composer dependency.

The repository includes a small example module to show how to implement these patterns, and hopefully other teams will be able to use it on other projects.

As with any project, it’s not complete. So far we’ve only built one application following this approach, and it seems to be working pretty well. Among the items in the issue queue is better integration with the configuration management system, so that we can make it clear if a setting has been overridden for the current environment.

I hope that this module will be useful for other teams - if you're building JavaScript applications that work with Drupal, please try it out, and if you use it on your project, I'd love to hear about it. Also, if you spot any problems, or have any ideas for improvements, please get in touch via the issue queue.

Tags: Capgemini, development, Drupal, JavaScript, open source

OpenSense Labs: SCORM and E-Learning. Can Drupal Fit In?

1 day 20 hours ago
By Vasundhra, Fri, 12/14/2018 - 17:32

Referred to as the de facto standard of e-learning, the Shareable Content Object Reference Model, aka SCORM, was sponsored by the US Department of Defense to bring uniformity to the standards for procuring both training content and Learning Management Systems.

Long gone but not forgotten are the days when learning was limited to books and classrooms. With the development of technology, virtual learning has become an approachable and convenient alternative.

Can Drupal, which is a widely popular CMS for education websites, conform to SCORM standards? How does it ensure that it remains SCORM compliant? 


In Detail: What is SCORM?

SCORM is a set of standard guidelines and specifications that tells programmers how to create LMSs and training content that can be shared across systems.

The aim of SCORM was to create standard units of training and educational material that can be shared and reused across systems.
                           


Shareable Content Object refers to creating units of online training material that can be shared and reused across systems and contexts.

Reference Model refers to the fact that SCORM references existing standards in the education industry and tells developers how to use them together properly.

E-learning professionals, training managers, and instructional designers, working with authoring tools to design and produce content, are the ones who typically use SCORM packages.

Content (used in courses and LMSs) is exported to a SCORM package (a .zip folder) to give learners a seamless, smooth upload of the content.

The Evolution of SCORM

Since SCORM wasn’t built as a standard from the ground up, but was primarily a reference to existing ones, the goal was to create an interoperable system that would work well with other systems.

To date, there are three released versions of SCORM, each built on top of the previous one and solving the problems of its predecessor.

  • SCORM 1.0

SCORM 1.0 was merely a draft outline of the framework. It did not include any fully implementable specifications but rather contained a preview of work which was yet to come.

SCORM 1.0 included the core elements that would become the foundation of SCORM.  

In other words, this version specified how content should be packaged, how it should communicate with learning systems, and how it should be described.

  • SCORM 1.1

SCORM 1.1 was the first implementable version of SCORM. It marked the end of the trial implementation phase and the beginning of the application phase for ADL. 

  • SCORM 1.2

SCORM 1.2 solved many of the problems that came with version 1.1. It provided robust, implementable specifications and presented end users with drastic cost savings.

It was, and still remains, one of the most widely used versions.

  • SCORM 2004 (1st - 4th edition)

The 2004 1st edition allowed content vendors to create navigation rules between SCOs. The 2nd edition covered the various shortcomings of the 1st and was driven by Advanced Distributed Learning (ADL), the initiative focused on developing and assessing distributed learning prototypes to enable more effective, efficient, and affordable learner-centric solutions.

The 3rd edition removed any ambiguity, improving the sequencing specifications for greater interoperability.

The final, 4th edition focused on disambiguation and on adding new sequencing specifications. These widened the options available to content authors, making the creation of sequenced content even simpler.
 


Why Should You Use SCORM?

Now that we have an idea of what SCORM is and how it attempts to reduce chaos across the industry, let’s look at the benefits it brings.

Here are some of the main reasons to use SCORM.

  • It is a pro-consumer initiative. Online courses can be used with any compliant LMS vendor: as long as you have the zip package, you can upload a course to whichever LMS you choose.
  • All the high-quality LMSs and authoring tools are SCORM compliant, so they can be part of a great ecosystem of interoperability and reliability.
  • The introduction and evolution of SCORM have brought about a great reduction in the overall cost of delivering training, because there is no additional cost for integrating compliant content.
  • SCORM helps standardize eLearning specifications, giving developers a standard blueprint to work with.
How does SCORM Work?

Besides guiding programmers, SCORM governs two main things to ensure everything works together: packaging content and exchanging data at runtime.

  • Packaging content, or the content aggregation model (CAM), defines how a piece of content should be presented in a physical sense. It is what an LMS needs in order to export and import launchable content without any human intervention.
  • Runtime communication, or data exchange, defines how the content works with the LMS while it is actually being played. This is the part that describes the delivery and tracking of the content; ultimately it covers things like “request the learner’s name” or “tell the LMS that the learner scored 95% in a test”, as sketched below.
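At run time that exchange happens through a JavaScript API object which the LMS exposes to the content. A minimal sketch for SCORM 1.2 (the data model element names come from the specification; the discovery logic is simplified):

    // Walk up the window hierarchy to find the API object provided by the LMS.
    function findAPI(win) {
      while (!win.API && win.parent && win.parent !== win) {
        win = win.parent;
      }
      return win.API || null;
    }

    var API = findAPI(window);
    if (API) {
      API.LMSInitialize("");                                // start the session
      var name = API.LMSGetValue("cmi.core.student_name");  // "request the learner's name"
      API.LMSSetValue("cmi.core.score.raw", "95");          // "the learner scored 95% in a test"
      API.LMSSetValue("cmi.core.lesson_status", "passed");
      API.LMSCommit("");                                    // persist the data
      API.LMSFinish("");                                    // end the session
    }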
“SCORM recommends contents to be delivered in a self-contained directory or a ZIP file.”
Working of SCORM Packages

SCORM recommends that content be delivered in a self-contained directory or a ZIP file. This archive contains content defined by the SCORM standards and is called a Package Interchange File (PIF), or, in other words, a SCORM package.

It contains all the files needed to deliver the content package via the SCORM run-time environment.

The course manifest file is considered the heart of the SCORM content packaging system. The manifest is an XML file that describes the content.

Some of the pieces involved in the packaging are:

  • Resources 

Resources are the list of parts that bundle together to form a single course. There are two types of resources that contribute to the course.

The first is an asset: a collection of one or more files that make up a logical unit presented to the user, mostly the instructional or static part of the content delivered via the course. The other is an SCO, or Sharable Content Object: a unit of instruction composed of one or more files, which communicates with the LMS.

A resource should contain a complete list of all the files required for it to function properly.

This allows the resource to be ported to a new environment and function in the same way there.

 

  • Organizations

Organizations are the logical grouping of resources into a hierarchical arrangement. This is what is delivered to a particular learner when an item has been selected.

  • Metadata 

Metadata is used to describe the elements of a content package in its manifest file. It is important because it facilitates the discovery of learning resources across content packages or in a repository.

When a learning resource is intended to be reusable, it is a best practice to describe it with metadata. 

For describing learning content, the Learning Object Metadata standard provides many predefined fields.
  • Sequencing

Sequencing is responsible for determining what happens next when a learner exits an SCO. With navigational control, it orchestrates the flow and status of the course as a whole. 

However, it doesn’t affect how SCOs operate and navigate internally; that is defined by the content developer.
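Putting these pieces together, a minimal SCORM 1.2 imsmanifest.xml might look like the sketch below (identifiers, titles, and file names are invented for illustration):

    <?xml version="1.0" encoding="UTF-8"?>
    <manifest identifier="com.example.course" version="1.0"
              xmlns="http://www.imsproject.org/xsd/imscp_rootv1p1p2"
              xmlns:adlcp="http://www.adlnet.org/xsd/adlcp_rootv1p2">
      <metadata>
        <schema>ADL SCORM</schema>
        <schemaversion>1.2</schemaversion>
      </metadata>
      <organizations default="default_org">
        <organization identifier="default_org">
          <title>Example Course</title>
          <item identifier="item_1" identifierref="resource_1">
            <title>Lesson 1</title>
          </item>
        </organization>
      </organizations>
      <resources>
        <resource identifier="resource_1" type="webcontent" adlcp:scormtype="sco"
                  href="lesson1/index.html">
          <file href="lesson1/index.html"/>
          <file href="lesson1/scripts/scorm-api.js"/>
        </resource>
      </resources>
    </manifest>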

Drupal With SCORM 

Drupal is best at managing digital content, but the task of planning, implementing, and assessing a specific learning process is best done by an LMS.

How can Drupal become a platform for an organization that delivers effective training, manages learners and their individual progress, and records results?

Since Drupal is not an LMS, its distributions and modules are what make it effective here. When it comes to SCORM compliance, Drupal has Opigno LMS, a dedicated distribution.

Opigno LMS is a Drupal distribution that integrates H5P technology (an open-source, JavaScript-based content collaboration framework), which enables you to create rich interactive training content. It allows you to maintain training paths that are organized into courses and lessons.

This distribution includes the latest version of Opigno core that offers you effective and innovative online training tools.

Opigno LMS is fully compliant with SCORM (1.2 and 2004 v3) and offers a powerful editor for content management, in particular for creating course material. These courses can then be grouped into classes to provide easy, manageable training paths. It should also be noted that this distribution is the quickest way to stand up a functional e-learning platform out of the box, with users, courses, certificates, etc.

Built on this distribution, Opigno SCORM implements the SCORM feature in Opigno, allowing you to load and play SCORM packages within Opigno training; it is also responsible for handling and managing the training paths that are organized into courses and lessons.

Opigno LMS includes an app store that enables you to install the latest features easily, without requiring you to upgrade your current installation.

In terms of the requirements and expectations of learners, Opigno LMS can be summarized by the following specifications:

  1. Scalable enough to manage the demands of a dynamic, changing environment
  2. Safe and easy to update
  3. Supports further development of customized functionality, properly integrated with the core solution in a modular way
  4. Open, leaving each client free and independent
  5. And, most importantly, easy to integrate with other enterprise systems

The H5P JavaScript framework makes it easy to create, share, and reuse HTML5 content and applications, allowing users to build richer content. With H5P, authors can create and edit videos, presentations, games, advertisements, etc. To create an e-learning platform, the integration of the H5P framework with SCORM is essential.


The H5P SCORM/xAPI module allows you to upload and view SCORM and xAPI packages. It uses two H5P libraries (H5P libraries are used to create and share rich content and applications):

  1. An H5P SCORM/xAPI library to view SCORM packages.
  2. An H5PEditor SCORM library to upload and validate SCORM packages.

You can then create a new content type from the package uploaded in the preceding step, using the H5P editor.

In a nutshell

Different people adopt SCORM for different reasons. You and your team are the only ones that can decide whether sticking to SCORM is worthwhile or not. 

Depending on the nature of your requirements and your course of action, you can decide which platform is best for you. At OpenSense Labs, we have been providing fitting solutions to our customers. Contact us at hello@opensenselabs.com to make the right decision on the choice of platform.

Tags: Drupal, Drupal 8, SCORM, LMS, Learning Management System, Shareable Content Object Reference Model, SCORM 1.0, SCORM 1.1, SCORM 1.2, SCORM 2004, E-learning, Content Aggregation Model, Organization, Sequencing, Resource, Opigno LMS

Wim Leers: State of JSON:API (December 2018)

2 days 19 hours ago

Gabe, Mateu and I just released the third RC of JSON:API 2, so time for an update! The last update is from three weeks ago.

What happened since then? In a nutshell:

RC3

Curious about RC3? RC2 → RC3 has five key changes:

  1. ndobromirov is all over the issue queue to fix performance issues: he fixed a critical performance regression in 2.x vs 1.x that is only noticeable when requesting responses with hundreds of resources (entities); he also fixed another performance problem that manifests itself only in those circumstances, but also exists in 1.x.
  2. One major bug was reported by dagmar: the ?filter syntax that we made less confusing in RC2 was a big step forward, but we had missed one particular edge case!
  3. A pretty obscure broken edge case was discovered, but probably fairly common for those creating custom entity types: optional entity reference base fields that are empty made the JSON:API module stumble. Turns out optional entity reference fields get different default values depending on whether they’re base fields or configured fields! Fortunately, three people gave valuable information that led to finding this root cause and the solution! Thanks, olexyy, keesee & caseylau!
  4. A minor bug was fixed that only occurs when installing JSON:API Extras and configuring it in a certain way.
  5. Version 1.1 RC1 of the JSON:API spec was published; it includes two clarifications to the existing spec. We already were doing one of them correctly (test coverage added to guarantee it), and the other one we are now complying with too. Everything else in version 1.1 of the spec is additive, this is the only thing that could be disruptive, so we chose to do it ASAP.

So … now is the time to update to 2.0-RC3. We’d love the next release of JSON:API to be the final 2.0 release!

P.S.: if you want fixes to land quickly, follow dagmar’s example:

If you don't know how to fix a bug of a #drupal module, providing a failing test usually is really helpful to guide project maintainers. Thanks! @GabeSullice and @wimleers for fixing my bug report https://t.co/bEkkjSrE8U

— Mariano D'Agostino (@cuencodigital) December 11, 2018
  1. Note that usage statistics on drupal.org are an underestimation! Any site can opt out from reporting back, and composer-based installs don’t report back by default.

  2. Since we’re in the RC phase, we’re limiting ourselves to only critical issues.

  3. This is the first officially proposed JSON:API profile!

Drupal Association blog: Introducing the (unofficial) Drupal Recording Initiative

3 days 10 hours ago

This is a guest blog by Kevin Thull (kthull).

Shout-out to Matt Westgate of Lullabot, whom I met earlier this year at DrupalCorn Camp in Des Moines, for casually referring to what I’ve been doing for the past five years as the “Drupal Recording Initiative.” It was yet another one of those moments when I realized that this work is much bigger than just me.

Let’s recap

What began in 2013 as an effort to record sessions for camps in my local community became a passion project and a way for me to help a handful of other camps record their sessions. That was the extent of my “vision” for this project.

With the advent of Slack (specifically the Drupal Camp Organizer Slack) it became even easier for camp organizers to discover and request my services, and suddenly I was recording a dozen-plus camps per year. With more robust documentation and enough inventory to ship kits to camps that I can’t attend, I have nearly complete coverage of camps in North America.

Turning point

With this year’s milestones, funding, and recognition (and all the publicity that came with it), conversations at camps turned more toward “what’s next” and “how do you grow this” rather than “how and why.”

As I started to think along the lines of future-proofing, open-sourcing, and growth, I’ve made some steps in the right direction this past year (again with no goals or plan):

  • Tweaks to the kits and troubleshooting for more reliable recording
  • Docs moved from Google Drive to GitHub for discoverability and collaboration
  • A growing contributor list: basically anyone who has helped me in the trenches, expressed interest in doing so, recorded sessions with my equipment, or has their own set of equipment based on my setup.
The initiative

This is rough, and the point of this post is to get input from the community.

Purpose

Make the recording of Drupal and related talks at camps, summits, meetups (essentially anything outside of DrupalCon) easy, turn-key, affordable, and available.

Roadmap

The first five years evolved organically and overall successfully, but without a plan. Following are the goals for the next five years.

Training and mentorship

I’ve proven that I can do this and do it very well. My goal is to spend an increasing amount of time teaching and supporting others to repeat my success.

  • Improved camp support – I already offer support for a few camps, primarily via email and Slack before and during events; need to schedule more check-ins during events
  • Post event – Schedule post-event calls to debrief and discover gaps in documentation and training
  • Ongoing solicitation for contributors – Identify and somehow organize a group of people that can manage the recordings whether I’m on-site or not to continually spread the knowledge and coverage; the goal here is one new contact per event

Expanded coverage

While it sounds impressive to directly or indirectly record nearly all North American camps, it’s not enough. There are many more events than just camps, and there’s so much more to the world than North America.

  • Shipping kits – I’ve shipped to two events in 2018. The goal is to double that each successive year until there is sufficient global coverage of camps and larger events; smaller events like meetups would need to be covered by local equipment hubs, detailed below
  • Funding for equipment hubs – Navigating customs can prove tricky and equipment hubs within countries or regions would mitigate that risk; possible sources include crowdsourced funding or the Drupal Community Cultivation Grant (more info on this toward the end of the post)
  • Expanding beyond Drupal events – An early goal was to make a device-agnostic recording solution, so it only makes sense that it also should be content-agnostic; primary focus would be adjunct communities: WordPress, PHP, Symfony, Javascript, etc.

Improved documentation

Moving existing docs to GitHub was a step in the right direction. More visuals are needed.

  • Record a setup video
  • Record a troubleshooting video
  • Add more pics and diagrams
  • Create docs for speakers: what to bring, what to do, what not to do, how to optimize your laptop for presenting, etc.
  • Create docs for organizers: what is provided, what is needed from the venue or camp, what speakers need to know, what room monitors need to know, etc.

Higher recording success rate

In 2018, the capture rate based on my documentation and remote support was 80-85% versus 92-100% when I was on the scene. The goal here is to bridge that gap.

  • Better pre-event support and onboarding
  • Broader support for USB-c
  • Improved coverage for non-Mac audio issues
  • Better prep for volunteers and contributors for on-site coverage

Streamlined funding

This is an expensive endeavor, and I couldn’t do it without camp donations, and reimbursements for airfare and lodging. At the same time, incidental costs (food, entertainment, commuting, etc.) add up, and the current model limits me geographically.

  • Charge a flat fee of $1,000 per event (airfare and lodging typically runs $350-$750)
  • Roll surplus funds into new equipment and subsidized travel to events outside North America
  • Maintain existing crowdfunded campaign to cover personal costs (whether current GoFundMe or an alternate)
  • Potentially seek sponsorship or Drupal Community Cultivation Grant (again, see below)

Overall organization

This is definitely the squishiest part of the roadmap. Solo, this is easy to manage. But to grow, I need tools and I don’t know the best ones for this type of work.

  • Contacts – What is the best way to manage the growing list of contributors, as well as give recognition?
  • Scheduling – There should be an event calendar with ways to sign up as a recording volunteer
  • Accounting – There should be an open way to manage the funds
  • Outreach – What is the best way to publicly and continually reach out for new contributors, and to grow the base of recorded events?
  • Inventory – We need a managed inventory of equipment, who controls them, whether they are available or on loan, their condition, etc.
  • Options:

    1. Drupal.org community project
    2. GitHub project board
    3. Slack channel (they already exist in Drupal Slack and the Organizer Slack)
    4. Public meetings
    5. Other / All of the above

Content discoverability

I am not the only person recording sessions, but I am definitely the loudest and most prolific. But my tweets and siloed camp YouTube channels are not optimal for the greater community to find content. I don’t have the bandwidth to do this personally, but would contribute.

  • We absolutely need a Drupal equivalent to Wordpress.tv (several solutions are in the works, but that also has been the state of things for years)
  • Aggregate content from all events, including DrupalCon
  • Include curation and content authors to annotate various versions of the same talk, or even unpublish older or largely repetitive versions
  • Add tagging for searchability
  • Add captioning for accessibility
  • Promote local events as well as the Drupal project
Wrapping it up

I’m continually thanked and reminded of how important what I’m doing is for the community. At the same time, it’s hard for me because it feels pretty routine and relatively easy (aside from the unknowns that come with each venue setup, and the hourly hustle to connect presenters and confirm recordings). Yet recorded content is also how I first learned Drupal and it’s the very reason I began this effort.

So the next stage of this unexpected, unplanned success is to create a structure to prevent my own burnout and prevent this initiative from hitting a plateau. If you want to participate, hit me up on Drupal.org, Twitter, or send me an email.

Lastly, for those who don’t know, the Drupal Community Cultivation Grant exists for things like this. I am very grateful for the Drupal Association offering me a grant to support this program, though I hadn't yet applied (I should have and you should too).

Drupal blog: Plan for Drupal 9

3 days 13 hours ago

This blog has been re-posted and edited with permission from Dries Buytaert's blog. Please leave your comments on the original post.

At Drupal Europe, I announced that Drupal 9 will be released in 2020. Although I explained why we plan to release in 2020, I wasn't very specific about when we plan to release Drupal 9 in 2020. Given that 2020 is less than thirteen months away (gasp!), it's time to be more specific.

Shifting Drupal's six month release cycle

We shifted Drupal 8's minor release windows so we can adopt Symfony's releases faster.

Before I talk about the Drupal 9 release date, I want to explain another change we made, which has a minor impact on the Drupal 9 release date.

As announced over two years ago, Drupal 8 adopted a 6-month release cycle (two releases a year). Symfony, a PHP framework which Drupal depends on, uses a similar release schedule. Unfortunately the timing of Drupal's releases has historically occurred 1-2 months before Symfony's releases, which forces us to wait six months to adopt the latest Symfony release. To be able to adopt the latest Symfony releases faster, we are moving Drupal's minor releases to June and December. This will allow us to adopt the latest Symfony releases within one month. For example, Drupal 8.8.0 is now scheduled for December 2019.

We hope to release Drupal 9 on June 3, 2020

Drupal 8's biggest dependency is Symfony 3, which has an end-of-life date in November 2021. This means that after November 2021, security bugs in Symfony 3 will not get fixed. Therefore, we have to end-of-life Drupal 8 no later than November 2021. Or put differently, by November 2021, everyone should be on Drupal 9.

Working backwards from November 2021, we'd like to give site owners at least one year to upgrade from Drupal 8 to Drupal 9. While we could release Drupal 9 in December 2020, we decided it was better to try to release Drupal 9 on June 3, 2020. This gives site owners 18 months to upgrade. Plus, it also gives the Drupal core contributors an extra buffer in case we can't finish Drupal 9 in time for a summer release.

Planned Drupal 8 and 9 minor release dates.

We are building Drupal 9 in Drupal 8

Instead of working on Drupal 9 in a separate codebase, we are building Drupal 9 in Drupal 8. This means that we are adding new functionality as backwards-compatible code and experimental features. Once the code becomes stable, we deprecate any old functionality.

Let's look at an example. As mentioned, Drupal 8 currently depends on Symfony 3. Our plan is to release Drupal 9 with Symfony 4 or 5. Symfony 5's release is less than one year away, while Symfony 4 was released a year ago. Ideally Drupal 9 would ship with Symfony 5, both for the latest Symfony improvements and for longer support. However, Symfony 5 hasn't been released yet, so we don't know the scope of its changes, and we will have limited time to try to adopt it before Symfony 3's end-of-life.

We are currently working on making it possible to run Drupal 8 with Symfony 4 (without requiring it). Supporting Symfony 4 is a valuable stepping stone to Symfony 5 as it brings new capabilities for sites that choose to use it, and it eases the amount of Symfony 5 upgrade work to do for Drupal core developers. In the end, our goal is for Drupal 8 to work with Symfony 3, 4 or 5 so we can identify and fix any issues before we start requiring Symfony 4 or 5 in Drupal 9.

Another example is our support for reusable media. Drupal 8.0.0 launched without a media library. We are currently working on adding a media library to Drupal 8 so content authors can select pre-existing media from a library and easily embed them in their posts. Once the media library becomes stable, we can deprecate the use of the old file upload functionality and make the new media library the default experience.

The upgrade to Drupal 9 will be easy

Because we are building Drupal 9 in Drupal 8, the technology in Drupal 9 will have been battle-tested in Drupal 8.

For Drupal core contributors, this means that we have a limited set of tasks to do in Drupal 9 itself before we can release it. Releasing Drupal 9 will only depend on removing deprecated functionality and upgrading Drupal's dependencies, such as Symfony. This will make the release timing more predictable and the release quality more robust.

For contributed module authors, it means they already have the new technology at their service, so they can work on Drupal 9 compatibility earlier (e.g. they can start updating their media modules to use the new media library before Drupal 9 is released). Finally, their Drupal 8 know-how will remain highly relevant in Drupal 9, as there will not be a dramatic change in how Drupal is built.

But most importantly, for Drupal site owners, this means that it should be much easier to upgrade to Drupal 9 than it was to upgrade to Drupal 8. Drupal 9 will simply be the last version of Drupal 8, with its deprecations removed. This means we will not introduce new, backwards-compatibility breaking APIs or features in Drupal 9 except for our dependency updates. As long as modules and themes stay up-to-date with the latest Drupal 8 APIs, the upgrade to Drupal 9 should be easy. Therefore, we believe that a 12- to 18-month upgrade period should suffice.

So what is the big deal about Drupal 9, then?

The big deal about Drupal 9 is … that it should not be a big deal. The best way to be ready for Drupal 9 is to keep up with Drupal 8 updates. Make sure you are not using deprecated modules and APIs, and where possible, use the latest versions of dependencies. If you do that, your upgrade experience will be smooth, and that is a big deal for us.
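As a small, concrete example of what keeping up looks like in code: drupal_set_message() was deprecated in Drupal 8.5 in favour of the messenger service, and the deprecated function is removed in Drupal 9.

    <?php

    // Deprecated since Drupal 8.5.0, removed in Drupal 9:
    drupal_set_message(t('Settings saved.'));

    // The replacement API, which works in Drupal 8.5+ and in Drupal 9:
    \Drupal::messenger()->addStatus(t('Settings saved.'));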

Special thanks to Gábor Hojtsy (Acquia), Angie Byron (Acquia), xjm (Acquia), and catch for their input in this blog post.

Jacob Rockowitz: Managing Drupal and Webform configuration

3 days 14 hours ago

Webforms in Drupal 8 are configuration entities, which means that they are exportable to YAML files and this makes it easy to transfer a webform from one server environment to another. Generally, anything that defines functionality or behavior in Drupal 8 is stored as simple configuration or a configuration entity. For example, fields, views, and roles are stored as configuration entities. Things that are considered 'content' are stored in the database as content entities. Content entities include nodes, comments, taxonomy terms, users, and also webform submissions.
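In practice, moving that configuration between environments is typically a Drush round trip (standard Drush 9 commands shown; the sync directory location comes from your site's settings):

    # On the source environment: write the active configuration to the sync directory as YAML.
    drush config:export

    # Deploy that YAML, then on the target environment load it back into active configuration.
    drush config:import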

Managing Configuration

The core concept behind Drupal's configuration management is that you can export and import how a website is configured (aka how it works) from one environment to another. For example, we might want to copy the configuration from your staging server to your production server. Drupal 8 initially took the approach that all configuration from one environment needs to be moved to the new environment. The problem is that…

In the case of webforms and blocks, this is a major issue because site builders are continually updating these config entities on a production website. The Drupal community is aware of this problem - they have provided some solutions and are actively working to fix this challenge/issue in a future release of Drupal.

Improving Configuration Management

Dries Buytaert shared how we are improving Drupal's configuration management system and he notes that the currently recommended solutions are the Config Filter, Configuration Split, and Config Ignore modules.

Below is a summary...

OpenSense Labs: Build HIPAA-compliant Website with Drupal

3 days 21 hours ago
By Shankar, Wed, 12/12/2018 - 16:17

Healthcare. When you listen to this word, you get a feeling of ‘care’ and ‘improvement’ because that’s what the healthcare industry does. This industry cares for the people and works on the improvement of their health. And today’s growing number of healthcare providers, payers, and IT professionals need a HIPAA-compliant website that also ‘cares’ about processing, storing and transmitting protected health information.


Open source software aligns tremendously well with the requirements of the healthcare sector. Drupal, being an open source content management framework itself, aligns very well with Health IT interoperability and is great for commercial applications at the enterprise level.

What is HIPAA compliance?

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is legislation that was implemented for the easy retention of healthcare insurance coverage, which can be of huge significance whenever US workers change or lose their jobs.

The Health Insurance Portability and Accountability Act of 1996 is legislation that sets the standard for the protection of sensitive patient data

It sets the standard for the protection of sensitive patient data, and any organisation that handles protected health information (PHI) has to make sure that all the required physical, network, and process security measures are adhered to. According to Amazon Web Services, PHI includes a very wide set of personally identifiable health and health-related data, including insurance and billing information, diagnosis data, clinical care data, and lab results such as images and test results.

It also encourages the use of electronic health records (EHR) to improve the efficiency and quality of the US healthcare system via better information sharing.

It encompasses covered entities (CEs), anyone who provides treatment, payment, and operations in healthcare, as well as business associates, anyone who can access patient information and supports treatment, payment, or operations. Moreover, subcontractors, or business associates of business associates, must also be in compliance.

How to make your website HIPAA compliant?

In order to make sure that your website is HIPAA compliant, you must start by establishing new processes. Make sure that PHI is only accessible to authorised personnel, and establish processes for deleting, backing up, and restoring PHI as needed. Emails containing PHI should be sent in an encrypted and secure manner.

Moreover, it is of great significance that you partner with web hosting companies that are HIPAA compliant and have processes for protecting PHI. Also, sign a business associate contract with third parties who have access to your patient’s PHI.

It is of paramount importance to buy and implement an SSL certificate for your website and to ensure that all web forms on your site are encrypted and safe.

Use Case: Drupal ensures HIPAA compliance

Drupal, being one of the most security-focussed CMSs, comes with strong database encryption mechanisms. For high-security applications, Drupal can be configured for full database encryption. When whole-database encryption is not desirable, fine-grained control is available for safeguarding more specific information: user accounts, particular forms, and even the values of particular fields can be encrypted in an otherwise plaintext database.

Drupal’s encryption system is configurable to adhere to the norms of PCI, HIPAA and state privacy laws constituting offsite encryption key management

Drupal’s encryption system is configurable to adhere to the norms of Payment Card Industry (PCI), HIPAA and state privacy laws constituting offsite encryption key management. Drupal is also a spectacular solution as an enterprise-grade healthcare system because it can be extended. It is possible to leverage Drupal as a content dissemination network, intranet, or even to incorporate several systems within a single platform.

Integrating the Drupal data layer with an electronic medical record (EMR) system via a RESTful API connection could dramatically enhance interoperability. It can unlock important data from proprietary systems and their data silos.

Source: Acquia

Proprietary EMR systems excel at standardised approaches and at organising and storing enormous amounts of data, but they are not great for customisation or interoperability. The lack of interoperability in high-priced, single-vendor solutions, and the hurdles of operating in an integrated healthcare delivery setting, result in HIPAA non-compliance. Even when the files are electronic, the difficulty of moving them freely leads to frequent violations, such as missing legal authorisation or unencrypted emails.

Drupal and EMR integration is an excellent example of a Drupal-powered healthcare technology that empowers the staff to leverage critical and potentially life-saving data. Healthcare delivery systems, with the division of data silos, can witness the evolution from being a reactive diagnostic model to a proactive preventative model.

Drupal can be layered on top of multiple EMR systems within a medical group, and the information can be compiled into one physician portal. The integration of Drupal and EMR systems can be made through numerous feeds: API calls, XML or JSON feeds, and RESTful APIs.

Drupal offers granular user access control; that is, it gives site administrators complete authority over who can see and who can modify different parts of a site. It operates on a system of extensible user roles and access permissions. Thus, with the help of role-based provisioning, Drupal can keep critical data behind a firewall in a HIPAA-secure environment.
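For example (the permission name here is hypothetical), a module can gate PHI behind a dedicated permission and check it against the current user's roles:

    <?php

    use Drupal\Core\Session\AccountInterface;

    /**
     * Returns TRUE only for users whose roles grant the PHI permission.
     */
    function mymodule_user_may_view_phi(AccountInterface $account) {
      return $account->hasPermission('view patient records');
    }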

Drupal can also be configured to query the database via web services integration, based on specific EHR authorisation requirements, while still adhering to the user access permission controls. Therefore, the data remains safe and secure at all times.

Conclusion

Drupal is a magnificent solution for enterprises in the healthcare sector to help them process, store and transmit protected health information.

We have been steadfast in our goals to deliver a great digital experience with our expertise in Drupal development.

Contact us at hello@opensenselabs.com to build a HIPAA-compliant Drupal website.
 

Tags: HIPAA-compliant website, HIPAA, Health Insurance Portability and Accountability Act, HIPAA compliance, Drupal 8

ComputerMinds.co.uk: Security risks as Drupal matures

3 days 22 hours ago

After reading this from Ars Technica, which describes how a developer offered to 'help' the maintainer of an NPM module - and then slowly introduced malicious code to it - I can't help but wonder if the Drupal community is vulnerable to the exact same issue. Let's discuss!

Please, don't touch my package

NPM modules have been hacked before, and it's not pretty when it happens. Because of the way we use packages, it's a lot easier for nasty code to get sucked in to a LOT of applications before anyone notices. Attacks on the code 'supply chain', therefore, have tended to be high-profile and high-damage.

NPM is used as a source for a huge number of code projects, many of which use other bits of code from other NPM packages. Even moderate applications for PC, mobile or web can have hundreds or thousands of NPM packages pulled in. It's common for packages to depend on other packages, which depend on other packages, which need other packages, which require... you get the picture? There are so many fragments, layers and extra bits that the developers of an application don't necessarily know all the packages being pulled into it. It's so easy to just type "npm install somefancypackageineed" without thinking and without vetting. NPM will just go and get everything for you, and you don't need to care.

That's how it should be, right? We should be able to just add code and know that it's safe, right? In a perfect world, that would be fine. But in reality there's an increasingly large amount of trust being given when you add a package to your application, and developers don't realise it. It's events like this that are making people aware again that they are including code in their projects that they either do not scrutinise or do not know exists.

Drupal's moment will come

Fortunately, Drupal is a little different to NPM. Whilst modules are often dependent on other modules, we tend to have far fewer layers going on. It's much easier to know what modules and dependencies you're adding when you include a new module. But that doesn't mean we're immune.

This particular incident came about when a tired, busy module maintainer was approached and offered help. It's a classic social engineering hack.

"Sure, I'll help you! [mwahaha]"

What struck me was that Drupal probably has hundreds of module maintainers in similar circumstances. Put yourself in those shoes, for a moment:
- You maintain an old Drupal 7 module
- It has a few thousand sites using it still
- You're busy, don't have time for it anymore

If somebody offered to sort it all out for you, what would you say? I'm pretty sure most would be ecstatic! Hurrah! But how would you vet your new favourite person in the whole world, before making them a co-maintainer and giving them the keys to the kingdom?

Alternatively, what of this:
- There is an old module, officially unmaintained
- It still has users
- The maintainer cannot be contacted

Drupal has a system for allowing people to be made maintainers of modules, when the original maintainer cannot be contacted. How are these people vetted? I'm sure there's some sort of check, but what if it's not enough?

In particular, I want to point out that as Drupal 7 ages, there will be more and more old, unmaintained and unloved modules still used by thousands of sites. If we forget them and fail to offer them sufficient protection, they will become vulnerable to attacks just like this. Drupal's moment will come.

This is an open source issue

It would be very easy to run away screaming right now, having decided that open source technologies sound too dangerous. So I'll put in some positive notes!

That Drupal should be increasingly exposed to the possibility of social engineering and malevolent maintainers is no new issue. There are millions of open source projects out there, all exposed to exactly these issues. As the internet grows and matures and ages, these issues will become more and more common; how many projects out there have tired and busy maintainers?!

For now, though, it must be said that the open source communities of the world have done what few thought possible. We have millions of projects and developers around the world successfully holding onto their trusty foundations, Drupal included. Many governments, enterprises and organisations have embraced the open source way of working on the premise that although there is risk in working differently, there is great merit in the reward. To this day, open source projects continue to thrive and to challenge the closed-source world. It is the scrutiny and the care of the open source community that keeps it clear and safe. As long as we continue to support and love and use our open source communities and contributions, they will stay in good repair and good stead.

If you were thinking of building a Drupal site and are suddenly now questioning that decision, then a read of Drupal's security statement is probably worthwhile.

Know your cattle by name

The key mitigation for this risk, it should be said, is for developers to know what code is in their application. It's our job to care and so it's our job to be paranoid. But it's not always easy. How many times have you installed a module without checking every line of code? How many times have you updated a module without checking the diff in Git? It's not always practicable to scan thousands and thousands of lines of code, just in case - and you'd hope that it's not necessary - but that doesn't mean it's not a good idea.

Using Composer with Drupal 8 makes installing new modules as easy as using NPM, and exposes the same problems to some extent. Add in a build pipeline, and it's very easy to never even see a single line of the new code that you've added to your project. Am I poking a paranoia nerve, yet? ;)

For further fun, think back to other attacks in the last year where sources for external JS dependencies were poisoned, resulting in compromised sites that didn't have a single shred of compromised code committed - it was all in the browser. How's THAT for scary!

In short, you are at risk if:
- You install a module without checking every line of code
- You update a module without checking every line of code / the diff
- You use a DEV release of a module
- You use composer
- Your application pulls in external dependencies

These actions, these ways of working all create dark corners in which evil code can lie undetected.

The light shall save you

Fortunately, it can easily be argued that Drupal Core is pretty safe from these sorts of issues. Phew. Thanks to the wide community of people contributing and keeping keen eyes on watch, Core code can be considered as well-protected. Under constant scrutiny, there's little that can go wrong. The light keeps the dark corners away.

Contrib land, however, is a little different. The most popular modules not only have maintainers (well done, guys!), but many supporting developers and regular release cycles and even official 'Security Coverage' status. We have brought light and trust to the contrib world, and that's a really important thing.

But what does 'Security Coverage' really provide? Can it fail? What happens if there is a malicious maintainer? I wonder.

When the light goes out

Many modules are starting to see the sun set. As dust gathers on old Drupal 7 modules and abandoned D8 alpha modules, the dark corners will start to appear. 'Security Coverage' status will eventually be dropped, or simply forgotten about, and issue lists will pile up. Away from the safety of strong community, keen eyes and dedicated maintainers, what used to be the pride of the Drupal community will one day become a relic. We must take care to keep pride in our heritage, and not allow it to become a source of danger.

Should a Drupal module maintainer be caught out by a trickster and have their work hacked, what would actually happen? Well, for most old D7 modules we'd probably see a few thousand sites pull in the code without looking, and it would likely take some time for the vulnerability to be noticed, let alone fixed.

Fortunately, most developers need a good reason to upgrade modules, so they won't just pull in a new malicious release straight away. But there's always a way, right? What if the hacker nicely bundled all those issues in the queue into a nice release? Or simply committed some new work to the DEV branch to see who would pull it in? There are loads of old modules still running on dev without an official release. How many of us have used them without pinning to a specific commit?

Vigilance is my middle name!

I have tried to ask a lot of questions, rather than simply doom-mongering. There's not an obvious resolution to all of these questions, and that's OK. Many may argue that, since Drupal has never had an issue like this before, we must already have sufficient measures in place to prevent such a thing happening - and I disagree. As the toolkit used by the world's hackers gets ever larger and ever more complex, we cannot afford to be lax in our perspective on security. We must be vigilant!

Module maintainers, remain vigilant. Ask good questions of new co-maintainers. Check their history. See what they've contributed. Find out who they really are.

Developers, remain vigilant. Know your cattle. Be familiar with what goes in and out of your code. Know where it comes from. Know who wrote it.

Drupalers, ask questions. How can we help module maintainers make good decisions? How can we support good developers and keep out the bad?

Some security tips!
- Always know what code you're adding to your project and whether you choose to trust it
- Drupal projects not covered by the Security Team should be carefully reviewed before use
- Know what changes are being made when performing module updates and upgrades
- If using a DEV version of a module in combination with a build process, always pin to a specific git commit (rather than HEAD), so that you don't pull in new code unknowingly

Blair Wadman: How to customise content in partial Twig templates

3 days 23 hours ago

Using partial Twig templates is a great way to organise your frontend code because you can reuse code fragments in multiple templates. But what happens if you want to use the same partial template in multiple places and customise its content slightly?

You can do this by passing variables to the included partial template using the with keyword.

Kanopi Studios: Kanopi Studios is a Top Provider on Clutch

4 days 10 hours ago

 

It’s not easy to find a development partner you can trust. Particularly if you’ve never been immersed in the world of web development, it may take you some time to learn the language. That can make it even more difficult to know whether your partner is really staying on track with what you want to accomplish.

Luckily, knowing what to look for in a business partner can save you from all of the potential troubles later on. Ratings and reviews sites like Clutch can help you get there. This platform focuses on collecting and verifying detailed client feedback, and then using a proprietary research algorithm to rank thousands of firms across their platform. Ultimately, Clutch is a resource for business buyers to find the top-ranked service providers that match their business needs.

Luckily for us, users on Clutch will also find Kanopi Studios at the top of the list to do just that. Kanopi has been working with Clutch for a few months to collect and utilize client feedback to find out what we should focus on in the coming year. Through the process, we’ve coincidentally been named among Clutch’s top digital design agencies in San Francisco.

Here are some of the leading client reviews that led us to this recognition:

“They were fantastic overall. We had great success communicating to their team via video conferencing, and they were able to answer every question we had. They also worked quickly and were very efficient with their time, so we got a great value overall.”

“Kanopi Studios’ staff members are their most impressive assets — extremely intelligent, experienced, and personable. Building a website is never easy, but working with people you both respect and like makes a huge difference.”

“Kanopi Studios successfully migrated our Drupal platform while preserving all the content that we’ve built up over the years. They worked hard to achieve a responsive design that works well on both mobile and large desktop displays.”

Not only have these kind words earned us recognition on Clutch, but we’ve also gained the attention of the how-to focused platform, The Manifest (where we are listed among top Drupal developers in San Francisco), and the portfolio-focused site, Visual Objects (where we are gaining ground among top web design agencies site-wide).

Thank you, as always, to our amazing clients for the reviews and the support.

The post Kanopi Studios is a Top Provider on Clutch appeared first on Kanopi Studios.

Acro Media: Drupal 8 Commerce Performance Tuning

4 days 12 hours ago

Here are some performance tuning tips and instructions for setting up a very performant Drupal 8 Commerce site using Varnish, Redis, Nginx and MySQL. I’ve got this setup running nicely for at least 13,000 concurrent users and it should scale well past that.

FYI, we also have a Drupal 7 Commerce Performance Tuning guide here.

Varnish Config

You’ll need some specific config for Drupal as well as some extra config to work nicely with BigPipe caching. These are standard for Varnish and Drupal and not specific to Commerce.

Drupal

You’ll want to set up the Purge and Varnish Purge modules to handle tag-based cache invalidation; nothing here is unique to Commerce, so you can follow the standard instructions. You will, however, want to make sure your pages actually are cached, as modules or small misconfigurations can often make a page uncacheable. To work nicely with Varnish, you want the entire page to be cacheable so your web server doesn’t even get hit. An underused module that I find very helpful is Renderviz, which will show you a 3D breakdown of what cache tags are attached to what parts of the page and can help you identify problem parts. I run...

renderviz('max-age', '0')

...to show me anything that can’t be cached. Usually the parts you find can be corrected and made cacheable.

For example: In a recent set of performance testing I was doing, I found a newsletter signup that appeared on the bottom of every page had an overly aggressive honeypot setting, which rendered the page uncacheable. Changing the settings to only apply to necessary forms, as well as correcting a language selector, turned tons of uncached pages into cacheable pages. Now these pages return <10ms and put zero load on my web servers or database.
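For reference, this is the kind of cacheability metadata you want attached to custom render output so it doesn’t bust the page cache. A hedged sketch only: the theme hook, cache tag and settings names are made up for illustration.

// In a custom block build() or preprocess function: declare what the output
// depends on instead of letting it default to max-age 0.
$build['newsletter_signup'] = [
  '#theme' => 'newsletter_signup_block',
  '#cache' => [
    // Vary by interface language rather than disabling caching entirely.
    'contexts' => ['languages:language_interface'],
    // Invalidate when the relevant configuration changes.
    'tags' => ['config:my_module.newsletter_settings'],
    // Let the render cache and Varnish keep this indefinitely.
    'max-age' => \Drupal\Core\Cache\Cache::PERMANENT,
  ],
];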

Web Servers

PHP

Use the most modern version of PHP you can, preferably the latest stable release. Never ever ever use PHP 5, which is terrible, terrible, terrible. Otherwise, make sure you have sufficient memory and allowed threads, and that will cover most of your PHP tuning. This is almost certainly the most resource-heavy part of your Drupal stack, but it is also easy to scale horizontally, pretty much indefinitely. Also, the more you can make use of Varnish, the less this will get used.

Nginx/Apache

Most of this is just making sure you can handle the number of connections. You may need to up the file limit...

ulimit -n

...of your web user to allow for more than 1024 connections per nginx instance.

Database

A Commerce site is usually more write-heavy than your standard site, as your users create lots of "content" (aka carts and orders). This will usually change your MySQL config a bit, although the majority of your queries will still be reads. A pretty simple way to tune your site is to run...

mysqltuner

...against it after getting some real traffic data for at least a couple of days, or after simulating high traffic. Its recommendations will get you a pretty good setup.

There is one other VERY important thing you need to do: change your transaction isolation level from REPEATABLE-READ to READ-COMMITTED. REPEATABLE-READ, MySQL's default, is much too aggressive at locking to work with most Drupal sites, especially anything write-heavy. Without this change you will suffer from constant deadlocks even at fairly low traffic levels. Frankly, I think this should be a flag on the status page, but my patch hasn’t gotten any traction.
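One way to do this per-connection from Drupal itself (assuming your MySQL user is allowed to change the session isolation level; it can also be set globally in my.cnf) is an init command in settings.php, roughly like this:

// settings.php: ask MySQL for READ-COMMITTED on every Drupal connection.
$databases['default']['default']['init_commands'] = [
  'isolation_level' => 'SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED',
];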

Cache Server

Nothing special here, but you are going to want to use a separate caching backend. It could be Memcache, Redis or even just a separate MySQL database. Redis is nice and fast, but the biggest gain is simply splitting your cache away from the rest of your database so you can scale them more easily.
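If you go the Redis route, the settings.php wiring is roughly the following, assuming the Redis contrib module and the PhpRedis PHP extension are installed; check the module’s README for the exact keys in your version.

// settings.php: point Drupal's cache bins at Redis instead of the database.
$settings['redis.connection']['interface'] = 'PhpRedis';
$settings['redis.connection']['host'] = '127.0.0.1';
$settings['cache']['default'] = 'cache.redis';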

Patches

There are a few specific patches that will be a great help to your performance.

_list cache tag invalidation

See: https://www.drupal.org/project/drupal/issues/2966607

Every entity type has an entity_type_list cache tag, which gets invalidated any time an entity of that type is added or changed, to signal that any lists of those entities need to be rebuilt. This happens a LOT, but it is a relatively simple query:

update cachetags set invalidation=invalidation+1 where tag='my_entity_list'

This is an UPDATE, which is a blocking query: nothing else can edit this row while the query is running. That wouldn’t be so bad, except...

This query often gets run as part of larger tasks, in our case such as when placing an order. A big task like this is run in a transaction, which basically means we save up all the queries and run them at once so they can be rolled back if something goes wrong. This means, though, that the row stays locked for the whole duration of the transaction, not just the short time it takes this little query to run. If the invalidation happens near the start of the transaction, it can take a query that would take 0.002 seconds and make it take 0.500 seconds, for example. Now, if we have more than two of these happening a second, we start to build up a queue of these queries, which just keeps getting longer and longer until we start returning timeouts. Since this query is part of the bigger order transaction, it stops the whole order from being processed and can bring your checkout flow to a halt.

Thankfully, the above listed patch allows these cache invalidations to be deferred so as to not block large transactions. I think the update query for invalidating cache tags is still a bottleneck as you could eventually reach it without these long transactions, but at this point that problem is more hypothetical than something you will practically encounter.
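For context, the tag itself is bumped through the Cache API whenever an entity of that type is saved; using nodes as the example, the relationship looks roughly like this:

use Drupal\Core\Cache\Cache;

// Saving any node invalidates the 'node_list' tag, which is what becomes the
// UPDATE on the cachetags table shown above. It can also be done manually:
Cache::invalidateTags(['node_list']);

// Render arrays that list nodes declare the dependency from the other side:
$build['#cache']['tags'][] = 'node_list';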

Add index to profiles

See: https://www.drupal.org/project/profile/issues/3017788

As you start getting more and more customers and orders, you will get more profiles. Loading them, especially for anonymous users, will really start to slow down and become a bottleneck. The listed patch simply adds an index to prevent that. Please note, this is a patch for the Profile module, not Commerce itself.

Make language switcher block cacheable

See: https://www.drupal.org/project/drupal/issues/2232375

This issue is unfortunately on hold pending some large core changes, but once it does land, this will allow the language switcher block to be used without worry of it blocking full page caching.

Conclusion

You should be able to scale well above 10,000 concurrent users with these tips. If you encounter any other bottlenecks or bugs, I’d love to hear about them. If you want help with some performance improvements from Acro Media and yours truly, feel free to contact us.

Acro Media: Drupal 7 Commerce Performance Tuning

4 days 12 hours ago

When it comes to ecommerce, a fast site can make a big difference in overall sales. I recently went through an exercise to tune a Drupal 7 Commerce site for high traffic on a Black Friday sales promotion. In previous years, the site would die in the beginning of the promotion, which really put a damper on the sale! I really enjoyed this exercise, finding all the issues in Commerce and Drupal that caused the site to perform sub-optimally.

FYI, we also have a Drupal 8 Commerce Performance Tuning guide here.

Scenario

In our baseline for this specific site, the response time was 25 seconds and we were able to handle only about 1,000 orders an hour, with a very heavy percentage of 500s, timeouts and general unresponsiveness. CPU and memory utilization on the web and database servers was very high.

Fast-forward to the end of all the tuning and we were able to handle 12K-15K orders an hour! At that point the load generator couldn’t generate any more load, the internet bandwidth on the load generators would get saturated, or something else external to the Drupal environment became the limiter, so we stopped trying to tune things. Horizontal capacity from adding additional webheads was linear: if we had added more webheads, they could have handled the traffic. The database server wasn’t deadlocking, and its CPU and memory usage were very stable. CPU on the web servers would peak at ~80% utilization, then more capacity would get added by spinning up a new server. The entire time, response time hovered around 500-600ms.

Enough about the scenario. Let’s dive into things.

Getting Started

The first step in tuning a site for a high volume of users and orders is to build a script that will create synthetic users, then populate and submit the forms to add items to the cart, register new users, and input the shipping address and any other payment details. There are a couple of options for this. JMeter is very popular; I’ve used it in the past with pretty decent success. In my most recent scenario I used locust.io, because it was recommended as a good tool; I hadn’t used it before, gave it a try, and it worked well. There are other load testing tools available too.

OK, now that you are generating load on the site, start tuning it. I used New Relic's APM monitoring to flag transactions and PHP methods that were red flags: transactions that take a long time or happen with great frequency are all good candidates. If you don’t have access to New Relic, another option is Blackfire. Regardless of what you use for identifying slow transactions, use something.

Make sure that there’s nothing crazy going on. In my case, there was a really badly performing query that was called in the theme’s template.php, and it was getting loaded on every single page call, even when it wasn’t needed. Tuning that query gave us an instant speed-up in performance.

After that, we started digging into things. There are several core and contrib patches I’ll mention, and I'll explain why and when you should consider applying them on your site.

In your specific commerce site, things might be different. You might have different payment gateways or external integration points. But the process of identifying pain points is the same: run a 30-60 minute load test and find long-running PHP functions, then fix them so they don’t take as long.

As a first step, install the Memcache (or Redis) module and set it up for locking. Without that one step, you’ll almost immediately run into deadlocks on the DB for the semaphore table. This is a critical first step. From my experience, deadlocks are the number one issue when running a site under load. And deadlocks on the semaphore table is probably the most common scenario. Do yourself a favor. Install Memcache and avoid the problem entirely.
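With the Drupal 7 Memcache module, that looks roughly like this in settings.php (paths assume the module lives in sites/all/modules; check the module’s README for your layout):

// settings.php (Drupal 7): use Memcache for caching and, crucially, locking,
// so the semaphore table in MySQL stops being a deadlock hotspot.
$conf['cache_backends'][] = 'sites/all/modules/memcache/memcache.inc';
$conf['cache_default_class'] = 'MemCacheDrupal';
$conf['lock_inc'] = 'sites/all/modules/memcache/memcache-lock.inc';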

Then see if you can disable form caching on checkout and user registration. This helped save a TON of traffic against the database for forms that really don’t need to be cached. More about that later in specific findings.
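A hedged sketch of what that can look like in a small custom module (the form IDs are examples, and the core/Commerce patches listed under Specific Findings are still needed for AJAX-heavy forms):

/**
 * Implements hook_form_alter().
 */
function mymodule_form_alter(&$form, &$form_state, $form_id) {
  // Skip the cache_form table for transient forms like login, registration
  // and checkout.
  $uncached = in_array($form_id, array('user_login', 'user_register_form'))
    || strpos($form_id, 'commerce_checkout_form') === 0;
  if ($uncached) {
    $form_state['no_cache'] = TRUE;
  }
}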

One last thing before diving into some findings...

SHOW ENGINE INNODB STATUS

...will become your favorite friend. Use it to find deadlocks on your MySQL server.

Specific Findings

The following section describes specific problems and links to issues and patches related to the problems.
  • Do not attempt field storage write when field content did not change
    Commerce and Rules run and reprocess an order a lot. And then blindly save the results. If nothing has changed, why re-save everything again? So don’t. Apply this patch and see fewer deadlocks on order saves.
  • field_sql_storage_field_storage_load does use an unnecessary sort in the DB leading to a filesort
    Many times it makes sense to use your database to process the query, until it doesn't make sense. This is a case where it leads to a filesort in MySQL (which you can discover using EXPLAIN) and to locking of tables and deadlocks. It is not that hard to do the sort in PHP instead, so do it (see the sketch after this list).
  • Do not make entries in "cache_form" when viewing forms that use #ajax['callback'] (Drupal 7 port)
    This is a huge win, if you can pull it off. For transient form processing like login and checkout, disabling form cache is a huge relief to the DB. You might need to put the entire cart checkout onto a single page. No cart wizard. But the gains are pretty amazing.
  • If you are using captcha or anything with ajax on it on the login page, then you’ll need to make sure you are running the latest versions of Captcha and Recaptcha. See issues #2449209 and #2219993. Also, side note: if using the timing feature of recaptcha, the page this form falls on will not be cacheable and tends to bust page cache for important pages (like homepages that have a newsletter sign up form).
  • form_get_cache called when no_cache enabled
    You’ve done all that work to cut down on what is stored in cache. Great. But Drupal still wants to retrieve from cache. Let’s fix that. Cut down more DB calls.
  • commerce_payment_pane_checkout_form uses form_state values instead of input
    If your webshop is like most webshops, it is there to generate revenue. If you disable form caching on checkout, without this patch the values in your payment (including the ones for receiving payment) aren’t captured. Oops. Let’s fix that too.
  • Variable set stampede for js and css during asset building
    If you are using any auto scaling system and building out new servers when the site is under heavy load, you might already be using Advagg. But if you aren’t and are still using Drupal core’s asset system, spinning up a new system or two will cause some issues. Deadlocks galore when generating the CSS and JS aggregates. So either install Advagg or this patch.
  • Reduce database load by adding order_number during load
    Commerce and Rules really like to reprocess orders. An easy win is to reduce the number of one-off resaves and assign the order number after the first load.
  • Never use aggregation in maintenance mode
    While the site is under heavy load, the database sometimes becomes unreachable. Drupal treats this as maintenance mode. And tries to aggregate the JS/CSS and talk to the database. But the database isn’t reachable. It is a little ridiculous to aggregate JS/CSS on the maintenance page. And even more to try to talk to the database. So cut out that nonsense.
  • drupal_goto short circuits and doesn't set things to cache
    If you have any PHP classes you are using during the checkout, Drupal’s classloader auto loads them into memory. It then keeps track of where the files exist on the disk and this makes the next load of those classes just that much faster. Well, drupal_goto kills all this caching. And drupal_goto gets called when navigating through checkout.
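
As a footnote to the filesort item above: "doing the sort in PHP" just means pulling the field rows unordered and ordering them in memory. A generic sketch (not the actual patch) of sorting rows by entity ID and delta:

/**
 * Sorts raw field rows by entity_id and delta in PHP rather than ORDER BY.
 */
function mymodule_sort_field_rows(array $rows) {
  usort($rows, function ($a, $b) {
    if ($a->entity_id != $b->entity_id) {
      return $a->entity_id < $b->entity_id ? -1 : 1;
    }
    if ($a->delta == $b->delta) {
      return 0;
    }
    return $a->delta < $b->delta ? -1 : 1;
  });
  return $rows;
}
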
Recap

Wow! That was a long list of performance enhancements. Here’s a quick recap, though: identify the critical flow of your application, generate load on that flow, use a profiler to find pain points in that process, then start picking things off, looking on drupal.org for existing issues, filing bugs and applying patches. Many of the issues discussed here will apply to your site; others won’t, and you’ll have different issues.

Surprisingly, or maybe not surprisingly, the biggest wins in our discovery process were the low hanging fruit, not the complex changes. That query in the template.php was killing the site. After that, switching to use Memcache for the semaphore table and eliminating form cache for orders also cut down on a lot of chatter with the database.

I hope you too can tune that Drupal 7 Commerce site to be able to handle thousands of orders an hour. The potential exists in the platform, it is just a matter of giving performance bottlenecks a little attention and fine tuning for your particular use case. Of course, if you need a little help we'd be happy to assist. A little bit of time spent can have you reaping the rewards from then on.
