Amazee Labs: Recent Drupal Security Updates

12 hours 41 minutes ago
Drupal is all about security

The Drupal community is unique in many ways, and the Drupal Security Team is an example of this. They provide documentation about writing secure code and keeping your site secure. They work with the drupal.org infrastructure team and the maintainers of contributed modules, to look into and resolve security issues that have been reported.

Felix Morgan Thu, 05/24/2018 - 22:33

When a security issue is reported, the Drupal Security Team mobilizes to investigate, understand, and resolve it as soon as possible. They use a Coordinated Disclosure policy, which means that all issues are kept private until a patch can be created and released. Public announcements are only made when the issue has a solution and a secure version is available to everyone. This communication is sent out through all of the channels possible so that everyone is made aware of what they need to do to keep their sites safe and secure.

This means that everyone finds out about the patches, and therefore the vulnerabilities, at the same time. This includes people who want to keep their sites secure, as well as those who want to exploit vulnerabilities. Security updates become a matter of speed, and the development teams at Amazee Labs, along with our hosting partner amazee.io, are always ready to make sure patches are implemented as quickly as possible.

Recent Drupal Security Releases

On March 28th, 2018, the Drupal Security Team released SA-CORE-2018-002. This patch addressed a critical security vulnerability that needed to be fixed on every Drupal site in the world as quickly as possible. At the time of the patch release there were no publicly known exploits or attacks using the vulnerability, which was present in Drupal versions 6.x, 7.x, and 8.x and was caused by inadequate input sanitization on Form API (FAPI) AJAX requests.

On April 25th, 2018, SA-CORE-2018-004 was released as a follow-up patch. This release fixed a remote code execution (RCE) bug affecting any site running Drupal 7.x or 8.x. The vulnerability was critical, and both issues resulted from problems with how Drupal handles a “#” character in URLs.
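
To give a rough sense of the class of fix involved, here is a simplified, hypothetical sketch of the sanitization idea (it is not the actual core patch): untrusted request input is scrubbed of keys that begin with “#” before Form API processes it, because such keys would otherwise be treated as render array or form properties.

// Simplified illustration only. Drupal core's real fix lives in its request
// sanitization layer and is more involved than this.
function example_strip_hash_keys(array $input) {
  $sanitized = [];
  foreach ($input as $key => $value) {
    // Drop anything that could be interpreted as a Form/Render API property.
    if (is_string($key) && strpos($key, '#') === 0) {
      continue;
    }
    $sanitized[$key] = is_array($value) ? example_strip_hash_keys($value) : $value;
  }
  return $sanitized;
}

// For example, a '#post_render' key in a query string would be removed.
$clean_get = example_strip_hash_keys($_GET);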

What are the dangers?

There are a number of different kinds of attacks that could take advantage of vulnerabilities fixed in the recent security updates. One kind of attack that is becoming more common is the installation of cryptocurrency mining software. These attacks are both subtle and resilient and use the CPU of the site server to generate cryptocurrency for the attacker.

Amazee Labs is keeping your sites safe

The Amazee Labs team takes these security releases seriously and works quickly to prepare for these updates. We inform our clients as soon as possible about the upcoming release and organize the maintenance and development teams to be ready to run the updates at the time of the release. During these “patch parties” our global teams work together to solve problems and secure all sites by leveraging everyone’s expertise all at once.

Implementing these measures takes development time not allotted in our usual maintenance budgets. We will always let you know when additional work is needed, and keep the communication channels open to address any concerns.

An additional layer of security is provided to our clients who host with our partner amazee.io. As soon as a security patch is released, the amazee.io team works to put an infrastructure-level mitigation in place. This means that all Drupal sites they host are immediately secured against initial attacks. You can read a detailed breakdown of how they accomplished this here.

Acro Media: Installing Drupal Commerce 2 Using Lando

18 hours 29 minutes ago

In this video, Josh Miller shows you how to install Drupal Commerce 2 using a local development tool called Lando. Further instructions are included below the video.

Timestamps:

  1. Commerce Kickstart download: 0:51
  2. “composer install” command: 8:00
  3. “lando init” command: 12:56
  4. “lando start” command: 15:06
  5. “Drupal install” screen: 17:04
  6. “lando stop” command: 21:18

Prerequisites:

  1. Download and install Composer
  2. Download and install Lando

Code generated during this video:

https://github.com/AcroMedia/install-commerce-lando 

Installing Drupal Commerce 2 locally using Commerce Kickstart, Composer, and Lando

Getting Drupal up and running on your computer is an important first step as an evaluator. The good news is that there’s a lot of tech that makes this easier than ever before. We’re going to walk you through how to install Commerce 2 using the Kickstart resource, Composer, and Lando.

  1. Download and install Composer
  2. Download and install Lando
  3. Next, go to Commerce Kickstart to create and download your customized composer.json file
  4. Run ‘composer install’
  5. Run ‘lando init’
  6. Run ‘lando start’
  7. Visit your local URL and install Drupal
  8. Start building!

What is Drupal Commerce

Drupal Commerce is an ecommerce focused subset of tools and community based on the open source content management system called Drupal. Drupal Commerce gives you the ability to sell just about anything to anyone using a myriad of open source technologies and leveraging hundreds of Drupal modules built to make that thing you need do that thing you want.

We use Commerce Kickstart to get things started.

What is Composer

Composer is the PHP dependency manager that can not only build and bring in Drupal, Drupal Commerce, and Symfony, but is the technology behind the newest Drupal Commerce Kickstart distribution. We leverage the composer.json file that commercekickstart.com gives us to bring in all of the Drupal code necessary to run a Drupal Commerce website.

To get started, we run “composer install” and that command brings in all the requirements for our project.

What is Docker

Docker is virtualization software that brings together app services like Apache, Nginx, MySQL, Solr, Memcache, and many other technologies so that they can run on your own computer. This installation video uses a tool that runs on top of Docker in an abstracted, and frankly easier, way.

If you want to learn more about Docker and the many different types of tools that run on top of it, we recommend John Kennedy’s 2018 DrupalCon presentation about Docker.

Another great resource that compares using Docker tools is Michael Anello’s take on the various technologies. 

What is Lando

Lando is a thin abstraction layer of tools on top of Docker that makes creating an environment as easy as “lando init” followed by “lando start.” Lando reduces the often confusing devops work of creating a local virtual environment to a few very well documented settings, which it turns into full docker-compose scripts that Docker, in turn, uses to create a local environment where everything just works together. We’re very excited to see how Lando and Drupal Commerce start to work together.

Flocon de toile | Freelance Drupal: Switch from Google Maps to Leaflet and OpenStreetMap with Geolocation on Drupal 8

23 hours 10 minutes ago

On May 2, 2018, Google announced a major policy change regarding the use of its online services, including its popular mapping service Google Maps and all its associated APIs, to embed or generate location-based information. Starting June 11, 2018, this change introduces charges for a service that was previously available for free under some relatively generous quota limits. Please read this post for full details on the policy change and its implications.

Drupal.org blog: Drupal.org's GDPR compliance statement

1 day 14 hours ago

Our global community includes many EU citizens and residents of the EEA, and we have taken steps to comply with the GDPR which takes effect on May 25, 2018.

Your rights under this law and how Drupal.org complies with GDPR

We've updated our Terms of Service, Privacy Policy, Git Contributor Agreement, and Digital Advertising Policy based on the requirements of the EU General Data Protection Regulation. We've also begun a campaign to reconfirm your consent to our marketing messages.

For easy and clear access to the changes: 

Human Readable Summary

Disclaimer: This summary is not itself a part of the Terms of Service, Privacy Policy, Git Contributor Agreement, or Digital Advertising Policy, and is not a legal document. It is simply a handy reference for understanding privacy rights and regulations. Think of it as the user-friendly interface to the legal language.

In plain language, regulations such as GDPR define the following roles, rights, and responsibilities:

  • Data Subject - this is you, the end user.
  • Data Controller - this is us, the Drupal Association as the owners and operators of Drupal.org and its sub-sites.
  • Data Processor - any other organization that processes personal data on behalf of the Data Controller.
Rights of the Data Subject
  • Right to be Informed - A data subject has the right to know whether personal information is being processed; where; and for what purpose.
     
  • Right to Access - A data subject has a right to access the information about them that is stored by the Data Controller.
     
  • Right to Rectification - A data subject has the right to correct any errors in the data about them. This can be done by editing your user account, or contacting the Drupal Association directly.
     
  • Right to Restrict Processing - A data subject has the right to request that data not be processed, and yet also not be deleted by the Data Controller.
     
  • Right to Object - A data subject has the right to opt out of marketing, processing based on legitimate interest, or processing for research or statistical purposes.
     
  • Right to be Forgotten - Also known as the right to revoke consent, the right to be forgotten states that a data subject has the right to request erasure of data, the cessation of processing by the controller, and halting processing of the data by third party processors.

    The conditions for this, as outlined in article 17, include the data no longer being relevant to the original purposes for processing, or a data subject withdrawing consent.

    It should also be noted that this right requires controllers to compare the subjects' rights to "the public interest in the availability of the data" when considering such requests.

  • Data Portability - A data subject has the right to receive a copy of their data in a 'commonly used and machine readable format.'

    This information is outlined in the sections below titled "Your Choices About Use and Disclosure of Your Information" and "Accessing and Correcting Your Information".

Responsibilities of the Data Controller and Data Processors
  • Privacy by Design - 'The controller shall..implement appropriate technical and organisational measures..in an effective way.. in order to meet the requirements of this Regulation and protect the rights of data subjects'. Article 23 of the GDPR calls for controllers to hold and process only the data absolutely necessary for the completion of its duties, as well as limit the access to personal data to those who need it to carry out these duties.
     
  • Breach Notification - The Data Controller must notify the appropriate data processing authority and any affected end user of any breach that might result in 'risk to the rights and freedoms of individuals' within 72 hours of becoming aware of the breach.

    A Data Processor must notify the Data Controller of any breach 'without undue delay.'

  • Data protection officer - A Data Controller or Processor must appoint a Data Protection Officer when: a Data Controller represents a public authority; or the core operations of the Controller require regular and systematic monitoring of Subjects on a large scale; or when the Controller's core operations depend on processing a large scale of special categories of data (including but not limited to health data, criminal conviction information, etc).
     

    The Drupal Association's core operations do not require the Association to establish a Data Protection Officer.

We take privacy and security very seriously, as all Drupal professionals do! We will continue analyzing the legal landscape and collecting feedback for future revisions.

If you have any questions or concerns about our GDPR compliance, or if you want to point out a mistake or provide a suggestion for the Terms of Service, Privacy Policy, Git Contributor Agreement, or Digital Advertising policy, you can send an email to help@drupal.org.

Lullabot: Decoupled Drupal Hard Problems: Schemas

1 day 17 hours ago

The Schemata module is our best approach so far to providing schemas for our API resources. Unfortunately, this solution is often not good enough. That is because the serialization component in Drupal is so flexible that we can’t anticipate the final form our API responses will take, meaning the schema that our consumers depend on might be inaccurate. How can we improve this situation?

This article is part of the Decoupled hard problems series. In past articles, we talked about request aggregation solutions for performance reasons, and how to leverage image styles in decoupled architectures.

TL;DR
  • Schemas are key for an API's self-generated documentation
  • Schemas are key for the maintainability of the consumer’s data model.
  • Schemas are generated from Typed Data definitions using the Schemata module. They are expressed in the JSON Schema format.
  • Schemas are statically generated but normalizers are determined at runtime.
Why Do We Need Schemas?

A database schema is a description of the data a particular table can hold. Similarly, an API resource schema is a description of the data a particular resource can hold. In other words, a schema describes the shape of a resource and the datatype of each particular property.

Consumers of data need schemas in order to set their expectations. For instance, the schema tells the consumer that the body property is a JSON object that contains a value that is a string. A schema also tells us that the mail property in the user resource is a string in the e-mail format. This knowledge empowers consumers to add client-side form validation for the mail property. In general, a schema will help consumers to have a prior understanding of the data they will be fetching from the API, and what data objects they can write to the API.

We are using the resource schemas in Docson and Open API to generate automatic documentation. When we enable JSON API and Open API, we get a fully functional and accurately documented HTTP API for our data model. Whenever we make changes to a content type, they will be reflected in the HTTP API and the documentation automatically. All thanks to the schemas.

A consumer could fetch the schemas for all the resources it needs at compile time, or fetch them once and cache them for a long time. With that information, the consumer can generate its models automatically without developer intervention. That means that with a single implementation, all of our consumers’ models are done forever. There is probably already a library for our consumer’s framework that does this.

More interestingly, since our schema comes with type information our schemas can be type safe. That is important to many languages like Swift, Java, TypeScript, Flow, Elm, etc. Moreover, if the model in the consumer is auto-generated from the schema (one model per resource) then minor updates to the resource are automatically reflected in the model. We can start to use the new model properties in Angular, iOS, Android, etc.

In summary, having schemas for our resources is a huge improvement for the developer experience. This is because they provide auto-generated documentation of the API and auto-generated models for the consumer application.

How Are We Generating Schemas In Drupal?

One of Drupal 8's API improvements was the introduction of the Typed Data API. We use this API to declare the data types for a particular content structure. For instance, there is a data type for a Timestamp that extends an Integer. The Entity and Field APIs combine these into more complex structures, like a Node.
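
As a rough, hedged illustration (the field and labels below are just examples, not code from the article), both a standalone typed data definition and a base field definition carry the type information that later makes schemas possible:

use Drupal\Core\Field\BaseFieldDefinition;
use Drupal\Core\TypedData\DataDefinition;

// A standalone typed data definition: a timestamp, which is an integer
// underneath, plus human-readable metadata.
$created_definition = DataDefinition::create('timestamp')
  ->setLabel('Created')
  ->setDescription('The time the item was created.');

// The Entity and Field APIs build on the same system. A base field declared
// on an entity type is described by the same typed data machinery.
$fields['created'] = BaseFieldDefinition::create('created')
  ->setLabel(t('Authored on'))
  ->setDescription(t('The time that the node was created.'));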

JSON API and REST in core can expose entity types as resources out of the box. When these modules expose an entity type they do it based on typed data and field API. Since the process to expose entities is known, we can anticipate schemas for those resources.

In fact, the only thing we can do is assume that resources are a serialization of the Field API and typed data. The base for JSON API and REST in core is Symfony's serialization component. This component is broken into normalizers, as explained in my previous series. These normalizers transform Drupal's inner data structures into other, simpler structures. After this transformation, all knowledge of the data type or structure is lost. This happens because the normalizer classes do not return the new types and new shapes the typed data has been transformed into. This loss of information is where the big problem lies with the current state of schemas.

The Schemata module provides schemas for JSON API and core REST. It does it by serializing the entity and typed data. It is only able to do this because it knows about the implementation details of these two modules. It knows that the nid property is an integer and it has to be nested under data.attributes in JSON API, but not for core REST. If we were to support another format in Schemata we would need to add an ad-hoc implementation for it.

The big problem is that schemas are static information. That means that they can't change during the execution of the program. However, the serialization process (which transforms the Drupal entities into JSON objects) is a runtime operation. It is possible to write a normalizer that turns the number four into 4 or "four" depending on whether the date of execution ends in an even minute or not. Even though this example is bizarre, it shows that determining the schema upfront without other considerations can lead to errors. Unfortunately, we can’t assume anything about the data after it's serialized.
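
As a contrived sketch of that flexibility (this is not code anyone would ship, just an illustration of why a static schema can't describe such output), a normalizer along these lines is perfectly legal:

use Symfony\Component\Serializer\Normalizer\NormalizerInterface;

/**
 * Normalizes the number four into 4 or "four" depending on the current minute.
 */
class EvenMinuteNormalizer implements NormalizerInterface {

  public function normalize($object, $format = NULL, array $context = []) {
    // The output type changes at runtime: an integer on even minutes,
    // a string on odd ones. No static schema can capture this.
    return ((int) date('i')) % 2 === 0 ? 4 : 'four';
  }

  public function supportsNormalization($data, $format = NULL) {
    return $data === 4;
  }

}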

We can either make normalization less flexible—forcing data types to stay true to the pre-generated schemas—or we can allow the schemas to change during runtime. The second option clearly defeats the purpose of setting expectations, because it would allow a resource to potentially differ from the original data type specified by the schema.

The GraphQL community is opinionated on this and drives the web service from their schema. Thus, they ensure that the web service and schema are always in sync.

How Do We Go Forward From Here

Happily, we are already trying to come up with a better way to normalize our data and infer the schema transformations along the way. Nevertheless, whenever a normalizer is injected by a third-party contrib module, or changes because of improved normalizations with backward compatibility, the Schemata module cannot anticipate it. Schemata will potentially provide the wrong schema in those scenarios. If we are to base the consumer models on our schemas, then they need to be reliable. At the moment they are reliable in JSON API, but only at the cost of losing flexibility with third-party normalizers.

One of the attempts to support data transformations and the impact they have on the schemas are Field Enhancers in JSON API Extras. They represent simple transformations via plugins. Each plugin defines how the data is transformed, and how the schema is affected. This happens in both directions, when the data goes out and when the consumers write back to the API and the transformation needs to be reversed. Whenever we need a custom transformation for a field, we can write a field enhancer instead of a normalizer. That way schemas will remain correct even if the data change implies a change in the schema.

We are very close to being able to validate responses in JSON API against schemas when Schemata is present. It will only happen in development environments (where PHP’s asserts are enabled). Site owners will be able to validate that schemas are correct for their site, with all their custom normalizers. That way, when a site owner builds an API or makes changes they'll be able to validate the normalized resource against the purported schema. If there is any misalignment, a log message will be recorded.

Ideally, we want the certainty that schemas are correct all the time. While the community agrees on the best solution, we have these intermediate measures to have reasonable certainty that your schemas are in sync with your responses.

Join the discussion in the #contenta Slack channel or come to the next API-First Meeting and show your interest there!

Note: This article was originally published on November 3, 2017. Following DrupalCon Nashville, we are republishing (with updates) some of our key articles on decoupled or "headless" Drupal as the community as a whole continues to explore this approach further. Comments from the original will appear unmodified.

Hero photo by Oliver Thomas Klein on Unsplash.

Chromatic: DrupalCon Nashville Recap

1 day 17 hours ago

It’s hard to believe DrupalCon Nashville was over a month ago! We have been busy here at Chromatic ever since, but we wanted to give a recap of the conference from our point of view.

Axelerant Blog: Women at Axelerant: Chapter Two

1 day 21 hours ago


I sat down to speak with the amazing women of Axelerant, and they each shared their unique perspectives about what it's like being professionals in their field. In this chapter, Mridulla, Akanksha, Sabreena, and Nikita expound on this—and in their own words.

Gizra.com: Understanding Media Management with Drupal Core

2 days 4 hours ago

But I just want to upload images to my site…

There is a clear difference between what a user expects from a CMS when they try to upload an image, and what they get out of the box. This is something that we hear all the time, and yet we, as a Drupal community, struggle to do it right.

There are no simple answers as to why Drupal has issues with media management. As technology evolves, newer and simpler tools raise the bar on what users expect to see in their apps. Take Instagram, for example: an entire team of people (not just devs) is focused on making the experience as simple as possible.

Therefore it’s normal to expect this type of simplicity everywhere. However, implementing these solutions is not always trivial, as you will see.

Continue reading…

Virtuoso Performance: Configuring migrations via a form

2 days 6 hours ago
mikeryan, Tuesday, May 22, 2018 - 09:29pm

Frequently, there may be parts of a migration configuration which shouldn’t be hard-coded into your YAML file - some configuration may need to be changed periodically, some may vary according to environment (for example, a dev environment may access a dev or test API endpoint, while prod needs to access a production endpoint), or you may need a password or other credentials to access a secure endpoint (or for a database source which you can’t put into settings.php). You may also need to upload a data file for input into your migration. If you are implementing your migrations as configuration entities (a feature provided by the migrate_plus module), all this is fairly straightforward - migration configuration entities may easily be loaded, modified, and saved based on form input, implemented in a standard form class.
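
For context, the snippets below live in an ordinary form controller. A minimal skeleton might look like this (the module, class, and form ID are hypothetical; only the load/modify/save pattern matters):

namespace Drupal\acme_migrate\Form;

use Drupal\Core\Form\FormBase;
use Drupal\Core\Form\FormStateInterface;
use Drupal\migrate_plus\Entity\Migration;

/**
 * Collects environment-specific migration settings from site builders.
 */
class MigrationSettingsForm extends FormBase {

  public function getFormId() {
    return 'acme_migration_settings_form';
  }

  public function buildForm(array $form, FormStateInterface $form_state) {
    // File upload and endpoint/credential elements go here (see below).
    $form['submit'] = [
      '#type' => 'submit',
      '#value' => $this->t('Save migration settings'),
    ];
    return $form;
  }

  public function submitForm(array &$form, FormStateInterface $form_state) {
    // Load the migration configuration entities with Migration::load(),
    // set values, and save (see the snippets that follow).
  }

}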

Uploading data files

For this project, while other CSV source files were static enough to go into the migration module itself, we needed to periodically update the blog data during the development and launch process. A file upload field is set up in the normal way:

$form['acme_blog_file'] = [
  '#type' => 'file',
  '#title' => $this->t('Blog data export file (CSV)'),
  '#description' => $this->t('Select an exported CSV file of blog data. Maximum file size is @size.', ['@size' => format_size(file_upload_max_size())]),
];

And saved to the public file directory in the normal way:

$all_files = $this->getRequest()->files->get('files', []);
if (!empty($all_files['acme_blog_file'])) {
  $validators = ['file_validate_extensions' => ['csv']];
  if ($file = file_save_upload('acme_blog_file', $validators, 'public://', 0)) {

So, once we’ve got the file in place, we need to point the migration at it. We load the blog migration, retrieve its source configuration, set the path to the uploaded file, and save it back to active configuration storage.

    $blog_migration = Migration::load('blog');
    $source = $blog_migration->get('source');
    $source['path'] = $file->getFileUri();
    $blog_migration->set('source', $source);
    $blog_migration->save();
    drupal_set_message($this->t('File uploaded as @uri.', ['@uri' => $file->getFileUri()]));
  }
  else {
    drupal_set_message($this->t('File upload failed.'));
  }
}

It’s important to understand that get() and set() only operate directly on top-level configuration keys - we can’t simply do something like $blog_migration->set('source.path', $file->getFileUri()), so we need to retrieve the whole source configuration array, and set the whole array back on the entity.
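
The same retrieve-modify-save dance applies to any top-level key of the migration configuration entity, not just source. A hedged example (the field mapping here is made up):

$process = $blog_migration->get('process');
// Hypothetical mapping: copy a source property into a destination field.
$process['field_subtitle'] = 'subtitle';
$blog_migration->set('process', $process);
$blog_migration->save();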

Endpoints and credentials

The endpoint and credentials for our event service are configurable through the same webform. Note that we obtain the current values from the event migration configuration entity to prepopulate the form:

$event_migration = Migration::load('event');
$source = $event_migration->get('source');
if (!empty($source['urls'])) {
  if (is_array($source['urls'])) {
    $default_value = reset($source['urls']);
  }
  else {
    $default_value = $source['urls'];
  }
}
else {
  $default_value = 'http://services.example.com/CFService.asmx?wsdl';
}
$form['acme_event'] = [
  '#type' => 'details',
  '#title' => $this->t('Event migration'),
  '#open' => TRUE,
];
$form['acme_event']['event_endpoint'] = [
  '#type' => 'textfield',
  '#title' => $this->t('CF service endpoint for retrieving event data'),
  '#default_value' => $default_value,
];
$form['acme_event']['event_clientid'] = [
  '#type' => 'textfield',
  '#title' => $this->t('Client ID for the CF service'),
  '#default_value' => @$source['parameters']['clientId'] ?: 1234,
];
$form['acme_event']['event_password'] = [
  '#type' => 'password',
  '#title' => $this->t('Password for the CF service'),
  '#default_value' => @$source['parameters']['clientCredential']['Password'] ?: '',
];

In submitForm(), we again load the migration configuration, insert the form values, and save:

$event_migration = Migration::load('event');
$source = $event_migration->get('source');
$source['urls'] = $form_state->getValue('event_endpoint');
$source['parameters'] = [
  'clientId' => $form_state->getValue('event_clientid'),
  'clientCredential' => [
    'ClientID' => $form_state->getValue('event_clientid'),
    'Password' => $form_state->getValue('event_password'),
  ],
  'startDate' => date('m-d-Y'),
];
$event_migration->set('source', $source);
$event_migration->save();
drupal_set_message($this->t('Event migration configuration saved.'));

Note that we also reset the startDate value while we’re at it (see the previous SOAP blog post).

Tags: Drupal Planet, Drupal, Migration

Use the Twitter thread below to comment on this post:

Configuring migrations via a form https://t.co/EZTiUKBazX

— Virtuoso Performance (@VirtPerformance) May 22, 2018

 

Kalamuna Blog: Drupalistas Spent Our Entire Swag Budget. Where did the Money Go?

2 days 11 hours ago
Shannon O'Malley, Tue, 05/22/2018 - 15:09

This April at DrupalCon Nashville, in addition to wanting to meet colleagues and soak up the great talks, we wanted to create a forum for the international Drupal community to do good. That’s why we used our sponsor booth wall as a space for attendees to promote nonprofits that work for causes that matter to them.

Categories: Articles, Community, Drupal, Nonprofits. Author: Shannon O'Malley

Dries Buytaert: My thoughts on Adobe buying Magento for $1.68 billion

2 days 13 hours ago

Yesterday, Adobe announced that it agreed to buy Magento for $1.68 billion. When I woke up this morning, 14 different people had texted me asking for my thoughts on the acquisition.

Adobe acquiring Magento isn't a surprise. One of our industry's worst-kept secrets is that Adobe first tried to buy Hybris, but lost the deal to SAP; subsequently Adobe tried to buy DemandWare and lost out against Salesforce. It's evident that Adobe has been hungry to acquire a commerce platform for quite some time.

The product motivation behind the acquisition

Large platform companies like Salesforce, Oracle, SAP and Adobe are trying to own the digital customer experience market from top to bottom, which includes providing support for marketing, commerce, personalization, and data management, in addition to content and experience management and more.

Compared to the other platform companies, Adobe was missing commerce. With Magento under its belt, Adobe can better compete against Salesforce, Oracle and SAP.

While Salesforce, SAP and Oracle offer good commerce capability, they lack satisfactory content and experience management capabilities. I expect that Adobe closing the commerce gap will compel Salesforce, SAP and Oracle to act more aggressively on their own content and experience management gap.

While Magento has historically thrived in the SMB and mid-market, the company recently started to make inroads into the enterprise. Adobe will bring a lot of operational maturity; how to sell into the enterprise, how to provide enterprise grade support, etc. Magento stands to benefit from this expertise.

The potential financial outcome behind the acquisition

According to Adobe press statements, Magento has achieved "approximately $150 million in annual revenue". We also know that in early 2017, Magento raised $250 million in funding from Hillhouse Capital. Let's assume that $180 million of that is still in the bank. If we do a simple back-of-the-envelope calculation, we can subtract this $180 million from the $1.68 billion, and determine that Magento was valued at roughly $1.5 billion, or a 10x revenue multiple on Magento's trailing twelve months of revenue. That is an incredible multiple for Magento, which is primarily a licensing business today.

Compare that with Shopify, which is trading at a $15 billion valuation and has $760 million of trailing twelve month revenue. This valuation is good for a 20x multiple. Shopify deserves the higher multiple, because it's the better business; all of its business is delivered in the cloud and at 65% year-over-year revenue growth, it is growing much faster than Magento.

Regardless, one could argue that Adobe got a great deal, especially if it can accelerate Magento's transformation from a licensing business into a cloud business.

Most organizations prefer best-of-breed

While both the product and financial motivations behind this acquisition are seemingly compelling, I'm not convinced organizations want an integrated approach.

Instead of being confined to proprietary vendors' prescriptive suites and roadmaps, global brands are looking for an open platform that allows organizations to easily integrate with their preferred technology. Organizations want to build content-rich shopping journeys that integrate their experience management solution of choice with their commerce platform of choice.

We see this first hand at Acquia. These integrations can span various commerce platforms, including IBM WebSphere Commerce, Salesforce Commerce Cloud/Demandware, Oracle/ATG, SAP/hybris, Magento and even custom transaction platforms. Check out Quicken (Magento), Weber (Demandware), Motorola (Broadleaf Commerce), Tesla (custom to order a car, and Shopify to order accessories) as great examples of Drupal and Acquia working with various commerce platforms. And of course, we've quite a few projects with Drupal's native commerce solution, Drupal Commerce.

Owning Magento gives Adobe a disadvantage, because commerce vendors will be less likely to integrate with Adobe Experience Manager moving forward.

It's all about innovation through integration

Today, there is an incredible amount of innovation taking place in the marketing technology landscape (full-size image), and it is impossible for a single vendor to have the most competitive product suite across all of these categories. The only way to keep up with this unfettered innovation is through integrations.

For reference, here are the 2011, 2012, 2014, 2015, 2016 and 2017 versions of the landscape. It shows how fast the landscape is growing.

Most customers want an open platform that allows for open innovation and unlimited integrations. It's why Drupal and Acquia are winning, why the work on Drupal's web services is so important, and why Acquia remains committed to a best-of-breed strategy for commerce. It's also why Acquia has strong conviction around Acquia Journey as a marketing integration platform. It's all about innovation through integration, making those integrations easy, and removing friction from adopting preferred technologies.

If you acquire a commerce platform, acquire a headless one

If I were Adobe, I would have looked to acquire a headless commerce platform such as Elastic Path, Commerce Tools, Moltin, Reaction Commerce or even Salsify.

Today, there is a lot of functional overlap between Magento and Adobe Experience Manager — from content editing, content workflows, page building, user management, search engine optimization, theming, and much more. The competing functionality between the two solutions makes for a poor developer experience and for a poor merchant experience.

In a headless approach, the front end and the back end are decoupled, which means the experience or presentation layer is separated from the commerce business layer. There is a lot less overlap of functionality in this approach, and it provides a better experience for merchants and developers.

Alternatively, you could go for a deeply integrated approach like Drupal Commerce. It has zero overlap between its commerce, content management and experience building capabilities.

For Open Source, it could be good or bad

How Adobe will embrace Magento's Open Source community is possibly the most intriguing part of this acquisition — at least for me.

For a long time, Magento operated as Open Source in name, but wasn't very Open Source in practice. Over the last couple of years, the Magento team worked hard to rekindle its Open Source community. I know this because I attended and keynoted one of its conferences on this topic. I have also spent a fair amount of time with Magento's leadership team discussing this. Like other projects, Magento has been taking inspiration from Drupal.

For example, the introduction of Magento 2 allowed the company to move to GitHub for the first time, which gave the community a better way to collaborate on code and other important issues. The latest release of Magento cited 194 contributions from the community. While that is great progress, it is small compared to Drupal.

My hope is that these Open Source efforts continue now that Magento is part of Adobe. If they do, that would be a tremendous win for Open Source.

On the other hand, if Adobe makes Magento cloud-only, radically changes their pricing model, limits integrations with Adobe competitors, or doesn't value the Open Source ethos, it could easily alienate the Magento community. In that case, Adobe bought Magento for its install base and the Magento brand, and not because it believes in the Open Source model.

This acquisition also signals a big win for PHP. Adobe now owns a $1.68 billion PHP product, and this helps validate PHP as an enterprise-grade technology.

Unfortunately, Adobe has a history of being "Open Source"-second and not "Open Source"-first. It acquired Day Software in July 2010. This technology was largely made using open source frameworks — Apache Sling, Apache Jackrabbit and more — and was positioned as an open, best-of-breed solution for developers and agile marketers. Most of that has been masked and buried over the years and Adobe's track record with developers has been mixed, at best.

Will the same happen to Magento? Time will tell.

Ashday's Digital Ecosystem and Development Tips: Should You Hire In-House Programmers as Employees or Outsource to a Consulting Firm?

2 days 18 hours ago

Well sure, ok, maybe we might be slightly biased on this. We are, as it turns out, a consulting firm in the business of selling outsourced programming. Ahem...

But, nonetheless, I’ll try to be reasonably fair and balanced here. As a consultancy I think we have, in fact, a unique vantage point on such matters, since we spend each day of our lives straddling both sides of this topic: that is, we sell outsourced programming to organizations for whom outsourcing is a good fit; whereas we ourselves hire in-house staff programmers, i.e. we are an organization for whom outsourcing is not a good fit.

Web Wash: Create Individual Registration Forms using Multiple Registration in Drupal 8

2 days 19 hours ago

If a user needs to create an account on a Drupal site, they go to the user registration page at "/user/register". This page is the registration form on a Drupal site. You can customize it by adding or removing fields. But what if you want to have multiple registration pages?

Let's say you have two different roles on your Drupal site and you need a separate form for each role. How would you build that?

You could handle all of this by writing custom code, but remember we're using Drupal, which means there's a module that can handle this type of functionality. It's called Multiple Registration.

The Multiple Registration module allows you to create individual registration forms based on a user role in Drupal. When you register on one of the forms, you're automatically assigned the configured role.

In this tutorial, you'll learn how to use Multiple Registration to create individual registration forms.

Web Omelette: How to render entity field widgets inside a custom form

3 days 2 hours ago

In an older article we looked at how to render an entity form programmatically using custom form display modes. Fair enough. But did you ever need to combine this form with a few form elements of yours which do not have to be stored with the corresponding entity? In other words, you need to be working in a custom form…

What I used to do in this case was write my own clean form elements in a custom form and on submit, deal with saving them to the entity. This is not a big deal if I am dealing with simple form elements, and of course, only a few of them. If my form is big or complex, using multivalue fields, image uploads and stuff like this, it becomes quite a hassle. And who needs that?

Instead, in our custom form, we can load up and add the field widgets of our entity form. And I know what you are thinking: can we just build an entity form using the EntityFormBuilder service, as we saw in the previous article, and just copy over the form element definitions? Nope. That won’t work. Instead, we need to mimic what it does. So how do we do that?

We start by creating a nice form display in the UI where we can configure all our widgets (the ones we want to show and in the way we want them to show up). If the default form display is good enough, we don’t even need to create this. Then, inside the buildForm() method of our custom form we need to do a few things.

We create an empty entity of the type that concerns us (for example Node) and store that on the form state (for the submission handling that happens later):

$entity = $this->entityTypeManager->getStorage('node')->create([
  'type' => 'article',
]);
$form_state->set('entity', $entity);

Next, we load our newly created form display and store that also on the form state:

/** @var \Drupal\Core\Entity\Display\EntityFormDisplayInterface $form_display */
$form_display = $this->entityTypeManager->getStorage('entity_form_display')->load('node.article.custom_form_display');
$form_state->set('form_display', $form_display);

You’ll notice that the form display is actually a configuration entity whose ID is made up of the concatenation of the entity type and bundle it’s used on, and its unique machine name.

Then, we loop over all the components of this form display (essentially the field widgets that we configure in the UI or inside the base field definitions) and build their widgets onto the form:

foreach ($form_display->getComponents() as $name => $component) {
  $widget = $form_display->getRenderer($name);
  if (!$widget) {
    continue;
  }
  $items = $entity->get($name);
  $items->filterEmptyItems();
  $form[$name] = $widget->form($items, $form, $form_state);
  $form[$name]['#access'] = $items->access('edit');
}

This happens by loading the renderer for each widget type and asking it for its respective form elements. And in order for it to do this, it needs an instance of the FieldItemListInterface for that field (which at this stage is empty) in order to set any default values. This we just get from our entity.

And we also check the access on that field to make sure the current user can access it.

Finally, we need to also specify a #parents key on our form definition because that is something the widgets themselves expect. It can stay empty:

$form['#parents'] = [];

Now we can load our form in the browser and all the configured field widgets should show up nicely. And we can add our own complementary elements as we need. Let’s turn to the submit handler to see how we can easily extract the submitted values and populate the entity. It’s actually very simple:

/** @var \Drupal\Core\Entity\Display\EntityFormDisplayInterface $form_display */
$form_display = $form_state->get('form_display');
$entity = $form_state->get('entity');
$extracted = $form_display->extractFormValues($entity, $form, $form_state);

First, we get our hands on the same form display config and the entity object we passed on from the form definition. Then we use the former to “extract” the values that actually belong to the latter, from the form state, into the entity. The $extracted variable simply contains an array of field names which have been submitted and whose values have been added to the entity.

That’s it. We can continue processing our other values and save the entity: basically whatever we want. But we benefited from using the complex field widgets defined on the form display, in our custom form.
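
From here, a minimal sketch of how the rest of the submit handler might continue (the extra form element name is a hypothetical example):

// The field values extracted by the form display are already set on the
// entity, so all that is left is to persist it.
$entity->save();

// Our own, non-entity form elements are still available on the form state.
if ($form_state->getValue('notify_editors')) {
  // Whatever custom processing the form is responsible for.
}

drupal_set_message($this->t('Created %label.', ['%label' => $entity->label()]));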

Ain't that grand?

Gizra.com: A Simple Decoupled Site with Drupal 8 and Elm

3 days 4 hours ago

This is going to be a simple exercise to create a decoupled site using Drupal 8 as the backend and an Elm app in the frontend. I pursue two goals with this:

  • Evaluate how easy it will be to use Drupal 8 to create a restful backend.
  • Show a little bit how to set up a simple project with Elm.

We will implement a very simple functionality. On the backend, just a feed of blog posts with no authentication. On the frontend, we will have a list of blog posts and a page to visualize each post.

Our first step will be the backend.

Before we start, you can find all the code I wrote for this post in this GitHub repository.

Drupal 8 Backend

For the backend, we will use Drupal 8 and the JSON API module to create the API that we will use to feed the frontend. The JSON API module follows the JSON API specification and currently can be found in a contrib project. But as was announced in the last DrupalCon Driesnote, the goal is to move it to an experimental module in core in the Drupal 8.6.x release.

But even before that, we need to set up Drupal in a way that is easy to version and to deploy. For that, I have chosen to go with the Drupal Project composer template. This template has become one of the standards for site development with Drupal 8 and it is quite simple to set up. If Composer is already installed, then it is as easy as this:

composer create-project drupal-composer/drupal-project:8.x-dev server --stability dev --no-interaction

This will create a folder called server with our code structure for the backend. Inside this folder we now have a web folder, which is where we have to point our webserver. It is also inside this folder that we have to put all of our custom code. For this case, we will try to keep the custom code as minimal as possible. Drupal Project also comes with the two best friends for Drupal 8 development: Drush and Drupal Console. If you don’t know them, Google them to find out more about what they can do.

After installing our site, we need to install our first dependency, the JSON API module. Again, this is quite easy: inside the server folder, we run the following command:

composer require drupal/jsonapi:2.x

This will accomplish two things: it will download the module and it will add it to the composer files. If we are versioning our site on git, we will see that the module does not appear on the repo, as all vendors are excluded using the gitignore provided by default. But we will see that it has been added to the composer files. That is what we have to commit.

With the JSON API module downloaded, we can move back to our site and start with site building.

Configuring Our Backend

Let’s try to keep it as simple as possible. For now, we will use a single content type that we will call blog and it will contain as little configuration as possible. As we will not use Drupal to display the content, we do not have to worry about the display configuration. We will only have the title and the body fields on the content type, as Drupal already holds the creation and author fields.

By default, the JSON API module already generates the endpoints for the Drupal entities and that includes our newly created blog content type. We can check all the available resources: if we access the /jsonapi path, we will see all the endpoints. This path is configurable, but it defaults to jsonapi and we will leave it as is. So, with a clean installation, these are all the endpoints we can see:

JSON API default endpoints

But, for our little experiment, we do not need all those endpoints. I prefer to only expose what is necessary, no more and no less. The JSON API module provides zero configurable options on the UI out of the box, but there is a contrib module that allows us to customize our API. This module is JSON API Extras:

composer require drupal/jsonapi_extras:2.x

JSON API Extras offers us a lot of options, from disabling an endpoint to changing the path used to access it, or renaming the exposed fields or even the resource. Quite handy! After some tweaking, I disabled all the unnecessary resources and most of the fields from the blog content type, reducing it to just the few we will use:

JSONAPI blog resource

Feel free to play with the different options. You will see that you are able to leave the API exactly as you need.

Moving Our Configuration to Version Control

If you have experience with Drupal 7, you probably used the Features module to export configuration to code. But one of the biggest improvements of Drupal 8 is the Configuration Management Interface (CMI). This system provides a generic engine to export all configuration to YAML files. But even if this system works great, it is still not the most intuitive or easy way to export the config. Using it as a base, there are now several options that expand the functionality of CMI and provide an improved developer experience. The two bigger players in this game are [Config Split](https://www.drupal.org/project/config_split) and the good old [Features](https://www.drupal.org/project/features).

Both options are great, but I decided to go with my old friend Features (maybe because I’m used to its UI). The first step is to download the module:

composer require drupal/features:3.x

One of the really cool functionalities that the Drupal 8 version of the Features module brings is that it can instantly create an installation profile with all our custom configuration. With just a few clicks we have exported all the configuration we did in the previous steps; but not only that, we have also created an installation profile that will allow us to replicate the site easily. You can read more about Features in the [official documentation on drupal.org](https://www.drupal.org/docs/8/modules/features/building-a-distribution-with-features-3x).

Now, we have the basic functionality of the backend. There are some things we should still do, such as restricting the access to the backend interface, to prevent login or registration to the site, but we will not cover it in this post. Now we can move to the next step: the Elm frontend.

Sidenote

I used Features in this project to give it a try and play a bit. If you are trying to create a real project, you might want to consider other options. Even the creators of the Features module suggest not using it for this kind of situation, as you can read here.

The Frontend

As mentioned, we will use Elm to write this app. If you do not know what it is, Elm is a pure functional language that compiles into JavaScript and is used to create reliable webapps.

Installing Elm is easy. You can build it from source, but the easiest and recommended way is to just use npm. So let’s do it:

npm install -g elm

Once we install Elm, we get four different commands:

  • elm-repl: an interactive Elm shell, that allows us to play with the language.
  • elm-reactor: an interactive development tool that automatically compiles our code and serves it on the browser.
  • elm-make: to compile our code and build the app we will upload to the server.
  • elm-package: the package manager to download or publish elm packages.

For this little project, we will mostly use elm-reactor to test our app. We can begin by starting the reactor and accessing it on the browser. Once we do that, we can start coding.

elm-reactor

Our First Elm Program

“If you wish to make an apple pie from scratch, you must first invent the universe.” - Carl Sagan

We start by creating a src folder that will contain all our Elm code, and in here we start the reactor with elm-reactor. If we go to our browser and access http://localhost:8000, we will see our empty folder. Time to create a Main.elm file in it. This file will be the root of our codebase and everything will grow from here. We can start with the simplest of all Elm programs:

module Main exposing (main)

import Html exposing (text)


main =
    text "Hello world"

This might seem simple, but when we access the Main.elm file in the reactor, there will be some magic going on. The first thing we will notice is that we now have a working page. It is simple, but it is an HTML page generated with Elm. But that’s not the only thing that happened. In the background, elm-reactor noticed we imported the Html package, created an elm-package.json file, added the package as a dependency and downloaded it.

This might be a good moment to do the first commit of our app. We do not want to include the vendor packages from Elm, so we create a .gitignore file and add the elm-stuff folder there. Our first commit will include only three things: the Main.elm file, the .gitignore and the elm-package.json file.

The Elm Architecture

Elm is a language that follows a strict pattern called [The Elm Architecture](https://guide.elm-lang.org/architecture/). We can summarize it in these three simple components:

  • Model, which represents the state of the application.
  • Update, how we update our application.
  • View, how we represent our state.

Given our small app, let’s try to represent our code with this pattern. Right now, our app is static and has no functionality at all, so there is not a lot to do. But, for example, we could start by moving the text we show on the screen into the model. The view will be the content we have in our main function, and as our page has no functionality, the update will do nothing at this stage.

type alias Model =
    String


model : Model
model =
    "Hello world"


view : Model -> Html Msg
view model =
    text model


main =
    view model

Now, for our blog, we need two different pages. The first one will be the listing of blog posts and the second one a page for an individual post. To simplify, let’s keep the blog entries as just a string for now. Our model will evolve into a list of posts. In our state, we also need to store which page we are on. Let’s create a variable to store that information and add it to our model:

type alias Model =
    { posts : List Post
    , activePage : Page
    }


type alias Post =
    String


type Page
    = BlogList
    | Blog


model : Model
model =
    { posts = [ "First blog", "Second blog" ]
    , activePage = BlogList
    }

And we need to update our view too:

view : Model -> Html Msg
view model =
    div [] (List.map viewPost model.posts)


viewPost : Post -> Html Msg
viewPost post =
    div [] [ text post ]

We now have the possibility to create multiple pages! We can create our update function that will modify the model based on the different actions we do on the page. Right now, our only action will be navigating the app, so let’s start there:

type Msg = NavigateTo Page

And now, our update will update the activePage of our model, based on this message:

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        NavigateTo page ->
            ( { model | activePage = page }, Cmd.none )

Our view should be different now depending on the active page we are viewing:

view : Model -> Html Msg
view model =
    case model.activePage of
        BlogList ->
            viewBlogList model.posts

        Blog ->
            div [] [ text "This is a single blog post" ]


viewBlogList : List Post -> Html Msg
viewBlogList posts =
    div [] (List.map viewPost posts)

Next, let’s wire the update into the rest of the code. First, we fire the message that changes the page from the views:

viewPost post =
    div [ onClick <| NavigateTo Blog ] [ text post ]

And as a last step, we replace the main function with a more complex function from the Html package (but still a beginner program):

main : Program Never Model Msg
main =
    beginnerProgram
        { model = model
        , view = view
        , update = update
        }

But we still have not properly represented the single blogs on their individual pages. We will have to update our model once again along with our definition of Page:

type alias Model =
    { posts : Dict PostId Post
    , activePage : Page
    }


type alias PostId =
    Int


type Page
    = BlogList
    | Blog PostId


model : Model
model =
    { posts = Dict.fromList [ ( 1, "First blog" ), ( 2, "Second blog" ) ]
    , activePage = BlogList
    }

And with some minor changes, we have the views working again:

view : Model -> Html Msg
view model =
    case model.activePage of
        BlogList ->
            viewBlogList model.posts

        Blog postId ->
            div [ onClick <| NavigateTo BlogList ] [ text "This is a single blog post" ]


viewBlogList : Dict PostId Post -> Html Msg
viewBlogList posts =
    div [] (Dict.map viewPost posts |> Dict.values)


viewPost : PostId -> Post -> Html Msg
viewPost postId post =
    div [ onClick <| NavigateTo <| Blog postId ] [ text post ]

We do not yet see any change on our site, but we are ready to replace the placeholder text of the individual pages with the content of the real Post. And here comes one of the cool functionalities of Elm, and one of the reasons why Elm has no runtime exceptions. We have a postId and we can get the Post from the list of posts we have in our model. But, when getting an item from a Dict, we always risk trying to get a non-existing item. If we call a function on this non-existing item, it usually causes errors, like the infamous undefined is not a function. In Elm, if a function may or may not return a value, it returns a special type called Maybe.

view : Model -> Html Msg
view model =
    case model.activePage of
        BlogList ->
            viewBlogList model.posts

        Blog postId ->
            let
                -- This is our Maybe variable. It could be annotated as `Maybe Post` or a full definition as:
                -- type Maybe a
                --     = Just a
                --     | Nothing
                post =
                    Dict.get postId model.posts
            in
                case post of
                    Just aPost ->
                        div [ onClick <| NavigateTo BlogList ] [ text aPost ]

                    Nothing ->
                        div [ onClick <| NavigateTo BlogList ] [ text "Blog post not found" ]

Loading the Data from the Backend

We have all the functionality ready, but we have to do something else before loading the data from the backend. We have to update our Post definition to match the structure of the backend. On the Drupal side, we left a simple blog data structure:

  • ID
  • Title
  • Body
  • Creation date

Let’s update the Post, replacing it with a record to contain those fields. After the change, the compiler will tell us where else we need to adapt our code. For now, we will not care about dates and we will just take the created field as a string.

type alias Post =
    { id : PostId
    , title : String
    , body : String
    , created : String
    }


model : Model
model =
    { posts = Dict.fromList [ ( 1, firstPost ), ( 2, secondPost ) ]
    , activePage = BlogList
    }


firstPost : Post
firstPost =
    { id = 1
    , title = "First blog"
    , body = "This is the body of the first blog post"
    , created = "2018-04-18 19:00"
    }

Then, the compiler shows us where we have to change the code to make it work again:

The Elm compiler helps us find the errors

-- In the view function:
case post of
    Just aPost ->
        div []
            [ h2 [] [ text aPost.title ]
            , div [] [ text aPost.created ]
            , div [] [ text aPost.body ]
            , a [ onClick <| NavigateTo BlogList ] [ text "Go back" ]
            ]


-- And we improve `viewPost` a bit, renaming it `viewPostTeaser`:
viewBlogList : Dict PostId Post -> Html Msg
viewBlogList posts =
    div [] (Dict.map viewPostTeaser posts |> Dict.values)


viewPostTeaser : PostId -> Post -> Html Msg
viewPostTeaser postId post =
    div [ onClick <| NavigateTo <| Blog postId ] [ text post.title ]

As our data structure now reflects the data model we have on the backend, we are ready to import the information from the web service. For that, Elm offers us a system called Decoders. We will also add a contrib package to simplify our decoders:

elm package install NoRedInk/elm-decode-pipeline

And now, we add our Decoder:

postListDecoder : Decoder PostList
postListDecoder =
    dict postDecoder


postDecoder : Decoder Post
postDecoder =
    decode Post
        |> required "id" string
        |> required "title" string
        |> required "body" string
        |> required "created" string
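As a small aside, decoders are plain values that we run against raw JSON; Json.Decode.decodeString returns a Result with either the decoded value or an error message. A minimal, self-contained example (not part of the app) looks like this:

import Json.Decode exposing (decodeString, list, string)

-- Decoding a raw JSON string in isolation, e.g. from elm-repl.
titlesResult : Result String (List String)
titlesResult =
    decodeString (list string) """["First blog", "Second blog"]"""

-- titlesResult == Ok [ "First blog", "Second blog" ]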

As our data will now come from a request, we need to update our Model again to represent the different states that request can be in:

type alias Model =
    { posts : WebData PostList
    , activePage : Page
    }


type WebData data
    = NotAsked
    | Loading
    | Error
    | Success data

In this way, the Elm language protects us, as we are forced to consider every state the request can be in, including failure. We now have to update our view to work with this new state:

view : Model -> Html Msg
view model =
    case model.posts of
        NotAsked ->
            div [] [ text "Loading..." ]

        Loading ->
            div [] [ text "Loading..." ]

        Success posts ->
            case model.activePage of
                BlogList ->
                    viewBlogList posts

                Blog postId ->
                    let
                        post =
                            Dict.get postId posts
                    in
                        case post of
                            Just aPost ->
                                div []
                                    [ h2 [] [ text aPost.title ]
                                    , div [] [ text aPost.created ]
                                    , div [] [ text aPost.body ]
                                    , a [ onClick <| NavigateTo BlogList ] [ text "Go back" ]
                                    ]

                            Nothing ->
                                div [ onClick <| NavigateTo BlogList ] [ text "Blog post not found" ]

        Error ->
            div [] [ text "Error loading the data" ]

We are ready to decode the data; the only thing left is to make the request. Most requests on a site happen when clicking a link (usually a GET) or submitting a form (POST or GET), and with AJAX we make requests in the background to fetch data that was not needed when the page first loaded but is needed afterwards. In our case, we want to fetch the data right at the start, as soon as the page is loaded. We can do that with a command, or as it appears in the code, a Cmd:

fetchPosts : Cmd Msg
fetchPosts =
    let
        url =
            "http://drelm.local/jsonapi/blog"
    in
        Http.send FetchPosts (Http.get url postListDecoder)
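The FetchPosts message has to carry the result of that request. A minimal sketch of how the Msg type and update could handle it, storing the outcome in our WebData field (the exact names and error handling here are illustrative; check the repository for the final code):

type Msg
    = NavigateTo Page
    | FetchPosts (Result Http.Error PostList)


update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        NavigateTo page ->
            ( { model | activePage = page }, Cmd.none )

        FetchPosts (Ok posts) ->
            -- The request and the decoding succeeded; keep the posts.
            ( { model | posts = Success posts }, Cmd.none )

        FetchPosts (Err _) ->
            -- Any HTTP or decoding failure ends up in the Error state.
            ( { model | posts = Error }, Cmd.none )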

But we have to use a new program function to pass the initial commands:

main : Program Never Model Msg
main =
    program
        { init = init
        , view = view
        , update = update
        , subscriptions = subscriptions
        }

Let’s forget about the subscriptions, as we are not using them:

subscriptions : Model -> Sub Msg
subscriptions model =
    Sub.none

Now, we just need to update our initial data; our init variable:

model : Model
model =
    { posts = NotAsked
    , activePage = BlogList
    }


init : ( Model, Cmd Msg )
init =
    ( model, fetchPosts )

And this is it! When the page is loaded, the program will use the command we defined to fetch all our blog posts! Check it out in the screencast:

Screencast of our sample app

If at some point that request becomes too heavy, we could change it to fetch only titles plus summaries, or only a small number of posts. We could add another fetch when we scroll down, or fetch the full post when we invoke the update function. Did you notice that the signature of update ends with ( Model, Cmd Msg )? That means we can return commands there to fetch data instead of just Cmd.none. For example:

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        NavigateTo page ->
            let
                command =
                    case page of
                        Blog postId ->
                            fetchPost postId

                        BlogList ->
                            Cmd.none
            in
                ( { model | activePage = page }, command )
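Note that fetchPost is not defined above; assuming the backend exposes one endpoint per post and that we add a matching FetchPost (Result Http.Error Post) message, a rough sketch (the URL and names are assumptions) could be:

-- Hypothetical sketch: the endpoint, message constructor and handling are assumed.
fetchPost : PostId -> Cmd Msg
fetchPost postId =
    let
        url =
            "http://drelm.local/jsonapi/blog/" ++ toString postId
    in
        Http.send FetchPost (Http.get url postDecoder)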

But let’s leave all of this implementation for a different occasion.

And that’s all for now. I might have missed something, as the frontend part grew a bit more than I expected, but check the repository, as the code there has been tested and works fine. If you have any questions, feel free to add a comment and I will try to reply as soon as I can!

End Notes

I did not dwell too much on the syntax of Elm, as there is already plenty of documentation on the official page. The goal of this post is to understand how a simple app is created from the very start and to see a simple example of the Elm Architecture.

If you try to follow this tutorial step by step, you may find an issue when trying to fetch the data from the backend while using elm-reactor. I had that issue too; it is a browser defense against Cross-site request forgery (https://es.wikipedia.org/wiki/Cross-site_request_forgery). If you check the repo, you will see that I replaced the default function for GET requests, Http.get, with a custom function to work around this.
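For reference, the idea is to build the GET request with Http.request so the headers and options can be controlled; the sketch below only illustrates the approach, and the header and options shown are placeholders rather than the repo's actual fix:

-- A rough sketch of a custom GET; treat the header and options as placeholders.
customGet : String -> Decoder a -> Http.Request a
customGet url decoder =
    Http.request
        { method = "GET"
        , headers = [ Http.header "Accept" "application/json" ]
        , url = url
        , body = Http.emptyBody
        , expect = Http.expectJson decoder
        , timeout = Nothing
        , withCredentials = False
        }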

I also didn’t add any CSS styling because the post would be too long, but you can find plenty of information on that elsewhere.


Mark Shropshire: CharDUG Presents Drupal 8 Training Series

3 days 6 hours ago

During the past few CharDUG (Charlotte Drupal User Group) meetings, I realized that we have a real need to help our existing Charlotte-area Drupalers using Drupal 7 move on to Drupal 8. There is also a huge opportunity to train those completely new to web development and Drupal. Out of some recent conversations, I have put together a training series that will span the next six months and become the focus of our monthly meetings on the 2nd Wednesday of each month.

The CharDUG Drupal 8 Training Series is a comprehensive set of training workshops to get attendees up to speed with all aspects of Drupal 8. Whether you are brand new to Drupal, focused on content management, a frontend or backend developer, or a devops engineer, this series contains what you will need to use Drupal 8 to its full potential. Attendees should bring friends, laptops, and questions. There is no need to attend every session, though it is recommended for those new to Drupal. An outline of what to expect is provided below. If you have questions or suggestions, don’t hesitate to reach out on meetup.com or @chardug.

Drupal 8: Installation and Features
  • June 2018
    • What is Drupal?
      • Community
      • Open Source
      • Contributing to Drupal
    • How to install Drupal 8
    • Important features of note
    • Changes since Drupal 7
    • Drupal release schedule
Drupal 8: Site Building
  • July 2018
    • Installing Drupal 8
    • Using Composer with Drupal
    • How to build a site without any development
      • Installing modules and themes
    • Top Drupal 8 contributed modules
    • Users, roles, and permissions
    • Configuring Drupal
Drupal 8: Managing Content
  • August 2018
    • Content management concepts
      • Content types
      • Content authoring experience
      • Content moderation
      • Permissions and roles
      • Content scheduling
    • Creating and editing content
    • Blocks
    • Layout Builder
    • Paragraphs
Drupal 8: Layout and Theming
  • September 2018
    • Layout options
    • Responsive design
    • Images and media
    • Developing a Drupal theme
    • Drupal and responsive layouts
    • Decoupled Drupal frontends
Drupal 8: Module Development
  • October 2018
    • Developing a module
      • Menus and routes
      • Permissions
      • Creating pages and admin forms
      • Event subscribers
    • Writing and running tests
Drupal 8: Deployment and Next Steps
  • November 2018
    • Deploying to production
      • Development workflows
      • Security
        • Guardr
      • SEO
    • Next steps for Drupal
      • Drupal 9
      • New initiatives
      • Decoupled Drupal

Jeff Geerling's Blog: Hosted Apache Solr's Revamped Docker-based Architecture

4 days 7 hours ago

I started Hosted Apache Solr almost 10 years ago, in late 2008, so I could more easily host Apache Solr search indexes for my Drupal websites. I realized I could also host search indexes for other Drupal websites too, if I added some basic account management features and a PayPal subscription plan—so I built a small subscription management service on top of my then-Drupal 6-based Midwestern Mac website and started selling a few Solr subscriptions.

Back then, the latest and greatest Solr version was 1.4, and now-popular automation tools like Chef and Ansible didn't even exist. So when a customer signed up for a new subscription, the pipeline for building and managing the customer's search index went like this:

Original Hosted Apache Solr architecture, circa 2009.

DrupalEasy: Using the Markup module to add content to your entity forms

5 days 21 hours ago

Have you ever been building a form and found yourself wishing that you could insert additional help text - or even other forms of content (images, video) inline with the form? While each field's "Description" field is useful, sometimes it isn't enough.

The Markup module solves this problem in an elegant way by providing a new "Markup" field type.

 

This field doesn't expose any input widgets to the end user; rather, it just allows site builders to add additional markup (content) to an entity form.

 

The markup isn't saved with the resulting entity - it's just there to provide additional information to the user filling out the form.

Granted, this has always been possible by writing a small custom module utilizing hook_form_alter(), but having it as a field type makes it much more convenient to use.

 
