A new system for managing configuration

gdd's picture

This is a proposed game plan for managing configuration information in Drupal 8. It is focused on config data - the kind of thing currently managed by the variables system, as well as more complicated items like Blocks or Views.

Thanks to the following for their input and inspiration:

  • Sam Boyer
  • Nathaniel Catchpole
  • Jeff Eaton
  • Larry Garfield
  • Earl Miles
  • Pierre Rineau
  • Bojhan Somers
  • David Strauss

What This Buys Us

  • Configuration becomes fully revisionable and manageable through code using existing deployment tools. We will also have the opportunity to do UI-based undo of configuration, including multi-level undo.
  • We can access variable configuration information very early in bootstrap without having to boot the database layer, offering many gains in performance and flexibility.
  • Increased flexibility in how data is organized; the variables table goes away. The myriad one-off SQL stores for configuration also go away, as the core configuration system can handle them directly.
  • By offering a standard interface, we get a lot more flexibility to do things like locking everything on one server (Zend has a readOnly flag that can be set dynamically) preventing changes on one server while enabling them on another.

What Is It

  • Configuration information should be stored on disk, in an easily manageable format like JSON (leave aside for the moment the question of what is and isn't configuration.) JSON has the advantage of being easily manageable in PHP in a reasonably performant way, as well as being non-executable which is advantageous from a security perspective. Drupal will interact with this JSON, reading and writing directly to and from it. The DB will not enter into the picture, except for caching and history, and even this will be pluggable to allow other options.
  • An interface will be provided for this data which will take its inspiration from Zend Config (for more information see my previous writeup). Basically, information will be stored hierarchically in a tree of configuration objects accessible through magic get/set functions for easy management (if this doesn't kill us from a performance perspective, see http://www.garfieldtech.com/blog/benchmarking-magic). These objects will be iterable and countable. We will provide a basic implementation of this interface for storing standard scalar data just as you would find in the variables table. In terms of organization, my idea is that the config information is stored in a tree, and each top branch is a module, plus a special top branch for core (possibly.) So you could have stuff like


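For instance, borrowing the branch names used later in this post, the drupal.site_information branch might serialize to JSON something like this (the keys other than site_name are invented for illustration):

```json
{
  "site_name": "My Drupal site",
  "site_mail": "admin@example.com",
  "site_slogan": "An invented slogan"
}
```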
    Under each branch, it would be up to the module to store things as they see fit. For more complicated stuff, we can do new implementations of the config interface (or extend the basic one). So we can have a special config object for blocks, and Views can do their own, and Panels, etc. This allows us the benefit of having a standard interface for accessing and saving this data, while still allowing modules with more complicated requirements to manage those requirements as they need.

    We should also offer the option to organize this data into Features-style 'functional' structures. For instance


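A hypothetical 'slideshow' functional branch might collect the view, block, and pager settings that make up one feature (all names invented for illustration):

```json
{
  "slideshow": {
    "view": { "name": "slideshow", "pager": 5 },
    "block": { "region": "sidebar_first", "title": "Latest photos" }
  }
}
```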
    and the like. The functional settings could be set to save to a separate directory, and we could define an order of overrides, probably with "local" winning out in all cases where there's a collision. Later, we can add sophisticated analysis tools to display overrides, much like the CSS tools in Chromium and Firebug.

  • When changes are saved back (through changes in the admin interface or other method) old revisions are stored to a configurable location (the database, a special directory, etc.) These may be a full copy of the previous object, or just a diff. This will give us the ability to implement rollback and undo. Additionally, since this is all stored on disk, it can be versioned through whatever system you are using for code versioning. This should be more of a CRAP implementation than a CRUD one. We can offer various rules for purging as need be. If possible we should store a pretty significant number of revisions before purging off the old ones. Imagine Time Machine-like functionality where you can step through old revisions of your site configuration, reverting to the one from 8 days ago.
  • My current idea is that each 'branch' is stored in a config directory which is server-writeable but protected from arbitrary reading via .htaccess rules. So for instance

    $config->drupal->site_information->site_name lives in /config/drupal.site_information.json
    $config->drupal->regional->default_country lives in /config/drupal.regional.json
    $config->my_module->foo->bar lives in /config/my_module.foo.json

    This allows us to lazy load bits of config data without ever having to expand the entirety of Drupal configuration into memory. The performance implications around reading/writing/caching this are not really my specialty, and I am open to suggestions.

    Because it is server-writeable, we will probably have to start taking a generally less trustworthy attitude towards this data.
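A rough sketch of how that lazy loading could work - the class names and mechanics here are invented for illustration, not a proposed implementation:

```php
<?php
// Hypothetical: resolve $config->a->b to /config/a.b.json, parsing the file
// only when a matching file actually exists (lazy load per branch).
class ConfigBranch {
  private $dir;
  private $prefix;

  public function __construct($dir, $prefix = '') {
    $this->dir = $dir;
    $this->prefix = $prefix;
  }

  public function __get($name) {
    $path = $this->prefix === '' ? $name : $this->prefix . '.' . $name;
    $file = $this->dir . '/' . $path . '.json';
    if (is_file($file)) {
      // This branch is backed by a file: parse it now, and only now.
      return new ConfigData(json_decode(file_get_contents($file), TRUE));
    }
    // No file at this level: keep descending the tree without touching disk.
    return new ConfigBranch($this->dir, $path);
  }
}

class ConfigData {
  private $data;

  public function __construct(array $data) {
    $this->data = $data;
  }

  public function __get($name) {
    $value = isset($this->data[$name]) ? $this->data[$name] : NULL;
    return is_array($value) ? new ConfigData($value) : $value;
  }
}
```

With this, `$config->drupal->site_information->site_name` reads and parses only /config/drupal.site_information.json; no other branch is ever expanded into memory.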

  • The interface should also support reasonably advanced merge operations, offering the opportunity to merge together a partial config with a full one, replacing only those items that exist in both. For instance say you have a client with a large number of subsites (think departments at a university.) For the most part, these are all the same, but each one has little tweaks. Say that one department wants their slideshow to have 10 images instead of 5. You override the View, change the pager, and there is a way to write out only the changed items, essentially a diff. When the View is shown, you load the master view, then merge in the diffs. Huge gain: if tomorrow the university decides it wants all the captions in the slideshow to be bolded, you can make this change in the master view and it will still propagate down to the overridden ones (as long as that particular property is not actually changed.) This kind of system could also be used to manage server-specific information like database settings, google maps api keys, etc. Exact implementation of this TBD.

    Modules will also be able to ship with a default set of settings of some sort (perhaps a module.settings.json file or a hook.) If these settings are changed, then just like above the diff is saved and merged in. I don't know if this is realistic to do performantly, or how it messes with the opportunity to have config data available at bootstrap without spinning up the db.
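The merge described above can be sketched as a recursive overlay, where keys present in the saved diff win and everything else is inherited from the master. The helper name is invented, not an actual Drupal API:

```php
<?php
// Hypothetical recursive merge: values present in $overrides replace the
// corresponding values in $master; all other keys are inherited.
function config_merge(array $master, array $overrides) {
  foreach ($overrides as $key => $value) {
    if (is_array($value) && isset($master[$key]) && is_array($master[$key])) {
      // Both sides have a sub-tree here: descend and merge.
      $master[$key] = config_merge($master[$key], $value);
    }
    else {
      // The override wins for this key.
      $master[$key] = $value;
    }
  }
  return $master;
}

// The university slideshow example: one department overrides only the pager.
$master = ['pager' => ['items' => 5], 'caption' => ['bold' => TRUE]];
$department_diff = ['pager' => ['items' => 10]];
$merged = config_merge($master, $department_diff);
// The department gets 10 items, but still inherits caption settings, so a
// later change to the master caption propagates to it automatically.
```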

  • How this whole thing is built up and accessed is up for debate. One easy thing to do would be to simply build and merge the entire config object as one of the first steps in bootstrap, making it a singleton accessible system-wide. However this has serious memory consequences, especially as we start storing more complex data in it. Another option is that when we access, for instance, $config->drupal->site_information->site_name, we just load that one item (or branch) and then let it go when we're done. Better for memory, but it means we end up possibly re-opening and parsing the file a lot more often (or the cached version). Or we do a mix of both (lazy load what's needed as needed, but keep it around in the singleton for later). A lot of experimentation is necessary to fully understand the pros and cons here. Architecturally, we should try to make the API independent of the file organization so that file organization is not an API break. "Use uncertainty as a driver."

  • Catch has brought up the prospect of pluggable storage backends. For instance, instead of writing to the file system we could write to apc, chdb, hidef or some other system for increased performance or other needs. This is an interesting idea although some of these systems have limitations like only being able to save scalars and not more complex data. Still I don't see why that should prevent us from offering up the ability to plug other storage backends in. Also, since each 'branch' of configuration is its own config object, there's no reason why we couldn't have pluggable storage per branch. So if you want to store the variables table in apc_define_constants, and views on disk, and everything else in the DB, then you could. David has also suggested ZooKeeper (https://github.com/andreiz/php-zookeeper) as a possible first implementation, which sounds pretty interesting. It has both REST and native PHP interfaces, and supports JSON and hierarchical organization.
  • This will replace the default settings.php and $conf. It is possible we will still need the ability to let modules set stuff in an executable file that is loaded before any work is done (Domain Access has been put forth as an example of this), but if so, we still don't have to ship with it.

How We Do It

  1. Create a system for reading/writing to disk; start with JSON as a testbed. Make this an interface that anyone writing a format has to implement.
  2. Create the interface and an implementation for a basic config object, basically a port of Zend Config. Nothing fancy. It should have the option to use magic functions and explicitly created equivalents so we can performance test.
  3. Figure out a good way to interface with this for variables. Implement the variables table with it. Do a bunch of testing and iterating.
  4. Work through core replacing variable_set() and variable_get() as needed, including stuff like system_settings_form().
  5. When we have this working well, we can start to work on some more complex implementations. Block config might be a good use case here.
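Step 1 could be sketched roughly like this - the interface and class names are invented here, not a committed design:

```php
<?php
// Hypothetical sketch of the pluggable format interface from step 1.
// Anyone writing a format (JSON, INI, ...) would implement this.
interface ConfigFormatInterface {
  // Parse a raw string read from disk into a PHP array.
  public function decode($raw);
  // Serialize a PHP array for writing back to disk.
  public function encode(array $data);
}

// JSON as the testbed implementation.
class JsonConfigFormat implements ConfigFormatInterface {
  public function decode($raw) {
    return json_decode($raw, TRUE);
  }
  public function encode(array $data) {
    return json_encode($data);
  }
}
```

Whatever the final shape, any format implementation should round-trip data cleanly: `decode(encode($data))` must give back the original array.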

There is a bit more unknown in this system than in the UUID system, which already has several implementations in Drupal-land. I suspect that after implementing the basics, we will learn a lot that will inform what our next steps will be.

What Can Go Wrong

Major areas of concern

  1. Performance - Will require an enormous amount of testing and iteration. We cannot slack off here; we must start hammering on things and optimizing as early as possible, and continuously throughout the development cycle to prevent regressions. I would love to see this somehow integrated into the testbots; alternatively I will lean heavily on Catch and others with expertise in this area (and start to develop some myself).
  2. Security - My hope is that by writing non-executable data to an isolated directory protected by .htaccess, we will mitigate most major concerns. (This model has served us well in the files directory.) The one thing I am concerned about is what we do with more sensitive data, i.e. do we store our database password here? I look to the experts for advice.
  3. What do we do about content type/field config changes? - Since they can make permanent modifications to the underlying data structures (and thus the content contained within), doing rollbacks/updates of them can be incredibly tricky. This may be something we just have to punt on, or we can follow the Features model, where moving to an old or new configuration may require undesirable steps (like deleting all content from a content type), but it should always be possible, and we give great warning to people who may be attempting something destructive. I am more than open to suggestions.
  4. Multisite / Sites.php - Haven't thought this through yet to be honest.


Nice writeup. Thanks. David

moshe weitzman's picture

Nice writeup. Thanks.

  1. David Strauss mentioned in his Core Conversations talk that we might have a validator which would reject changes that require data migration. Would be nice to add a 'proposed change validator' to the API.
  2. Perhaps we can brainstorm about a second system to convert after variables. Block config is so boring. I'd love to find a system that exercises the API a bit more and also delivers more value. I'd love to try to get field config into this system. The validator would reject field config changes that require data migration. Field API already does this - http://api.drupal.org/api/drupal/modules--field--field.crud.inc/function... - so yeah, I agree with punting on those changes.

I agree that moving

yched's picture

I agree that moving {field_config} and {field_config_instance} to the config system would be a good benchmark.

Notably, the amount of data (fields + instances across entity bundles) makes this an interesting case for the questions about caching and lazy loading. The current "one giant cache entry + static array" approach in D7's _field_info_collate_fields() has performance issues (see http://drupal.org/node/1040790).

Other Field API issues

yched's picture

Other questions raised by moving Field API to the config system:

  • Notification of change: regardless of the extent to which we can support mass data migration (i.e. field schema updates), some immediate action needs to be taken when a new field appears in the config files: mainly, a storage table needs to be created.
    Would this follow a workflow "a la Features"? E.g. deploy config files through a VCS update, run a "drush config-revert" command (or visit a special script similar to update.php?), which then detects newly added fields and takes the corresponding programmatic actions?

  • Special case of field deletion: conversely, when a field (or instance) suddenly disappears from the config files. In D7, deletion of field data is done in batches during cron, which means we need to keep the field and instance definitions around for a while in the {field_config_} tables until all the actual data is purged. If the definitions are just absent from the config files, it means we need to be able to reliably preserve them somewhere else (or be able to retrieve them from their latest state in config history - but not from a cache).

Obviously, discussion is open on whether the points above and in my previous comment make Field API a good 1st case or rather something to be kept for later :-)

justintime's picture

In the case of multiple servers using the same config in a load-balanced scenario, will there be any synchronization mechanisms provided out of the box? NFS can get messy, and not many Drupal shops are using Puppet/Chef/et al. JSON lends itself well to being transferred via HTTP, but then you have issues with security. You could use the db to notify other instances of a configuration change.

At a minimum, a drush command would be quite handy to provide admins a way to reliably push configuration data from server A -> B,C,D.
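As an illustration only (not runnable as-is), such a push could be as simple as an rsync loop over the web heads followed by a refresh trigger - host names, paths, and the refresh command below are all placeholders:

```shell
# Hypothetical sketch: push the config directory from server A to each
# web head, then ask each instance to pick up the changes.
CONFIG_SRC=/var/www/site/config/
for host in web-b web-c web-d; do
  rsync -az --delete "$CONFIG_SRC" "deploy@${host}:/var/www/site/config/"
  ssh "deploy@${host}" 'drush cache-clear all'  # or a dedicated config-refresh command
done
```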

Pluggable storage

gdd's picture

While the default implementation will write this data to disk, the actual storage implementation will be pluggable. So you could still store to the db if you wanted, and since the data will be isolated from content, it will still be easy to deploy between sites. You could even store it off to a remote location via web services. Lots of options will be available.

I would think that sites will

moshe weitzman's picture

I would think that sites will use either git pull or rsync to deliver new config to each server. Those calls can be embedded in drush scripts/commands which operate over your web server farm. The commands/scripts will also want to trigger a config refresh function in Drupal. See drush's site aliases feature for running commands over multiple servers.


skwashd's picture

I have used GlusterFS to handle keeping files in sync across multiple webheads. It could be used for this too.


catch's picture

I don't have code committed yet, but I have been thinking a lot about this problem recently (mainly with caching, although it is exactly the same issue here); there are some notes at http://drupal.org/sandbox/catch/1153584 - these notes include summaries of discussions elsewhere on Drupal.org (some quite old).

IMO we absolutely need to allow sites to do something like this cleanly (i.e. without hacking core at all and with minimal overhead in switching to a different implementation), but I don't think there's a particular need to ship with that mechanism unless it makes something else easier.

Since that is dealing with caching rather than configuration, I didn't include a drush command. There would be some value in adding either manual triggering of a refresh, or possibly using the queue system (to do that, each webserver would need to be able to find a unique identifier for the other servers when they want to make a change) - either of these would avoid the runtime overhead of fetching tombstone records on each request.

I like the pluggable storage.

pounard's picture

I like the pluggable storage. While it is good to have one common rule to rule them all, the configuration layer should be more abstract than this:

  1. Create an interface for configuration objects;
  2. And some implementations (one that stores on disk using the active record pattern would be perfect for variables, because variables are almost read-only except during development, so we can cache the result the same way core does right now).
  3. Let modules use volatile configuration objects (typically, context objects do not need storage).
  4. Do not forget merge(), override() and other such functions, which could allow runtime variable overrides in some contexts (for example).
  5. I also like the idea of having reader and writer objects that are agnostic of the configuration object implementation (as soon as you have an interface there, it's OK). Using this, you could use a JSON writer to export a node as a structured array, for example (even if it's not configuration); it then acts as a fully featured mapping layer API. They can help you send/read any bits of configuration and use them anywhere.
  6. Exportable objects then become only an interface with one method: getSchema() or getConfig() (the name is important, but any name that refers to the new API is good).

The point of having an independent - storage-independent, I mean - interface is that you can use it as a pure mapping layer for everything; not everything needs to be stored :)

For performance, use a granular caching mechanism (based on the most-used parts of the configuration tree); caching the full tree may be OK too, if it doesn't go over 500k (arbitrary value here) - variables are already stored that way anyway.

If the site configuration is a tree (finally \o/ !! yay, /me dancing!) you'll have to impose some conventions on where in the tree modules put their variables (don't mess with the core subtree, for example). You could then maybe cache on a per-module basis, for example.

EDIT: Removed useless text :)

I have thousands of ideas about the topic.

Re-EDIT: I like this other way of accessing variables:

// Can be NULL; could eventually throw an exception (non-existing key),
// or just log a watchdog error for developers.
$foo = config_get('core.performance.pagecache.enabled');
config_set('core.performance.pagecache.enabled', TRUE);
config_exists('core.performance.pagecache.enabled'); // Does the key exist?
config_revert('core.performance.pagecache.enabled'); // Revert to the default

This kind of API is not applicable to every configuration object, but it can easily be applied to variables themselves. If you force modules to declare their schema in some info/ini file, for example (and you disallow new key creation at runtime), you will avoid developers making wrong variable usages (basically typo errors), or at least warn them, and you will also ensure that when you uninstall a module you can remove its full configuration without leaving any garbage behind.
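A minimal sketch of that flat-key API, backed by a plain array of module-declared defaults - function behavior and storage here are invented for illustration, not an actual Drupal API:

```php
<?php
// Hypothetical: modules declare their keys and defaults up front; setting an
// undeclared key is refused, which catches typos and makes uninstall cleanup
// trivial (every key traces back to a declared schema).
$GLOBALS['config_defaults'] = [
  'core.performance.pagecache.enabled' => FALSE,
];
$GLOBALS['config_active'] = [];

function config_get($key) {
  if (array_key_exists($key, $GLOBALS['config_active'])) {
    return $GLOBALS['config_active'][$key];
  }
  if (array_key_exists($key, $GLOBALS['config_defaults'])) {
    return $GLOBALS['config_defaults'][$key];
  }
  // Unknown key: a real implementation might throw or log to watchdog here.
  return NULL;
}

function config_set($key, $value) {
  if (!array_key_exists($key, $GLOBALS['config_defaults'])) {
    // Never declared in any module schema: refuse the (probable) typo.
    throw new InvalidArgumentException("Undeclared config key: $key");
  }
  $GLOBALS['config_active'][$key] = $value;
}

function config_exists($key) {
  // "Exists" here means "declared by some module's schema".
  return array_key_exists($key, $GLOBALS['config_defaults']);
}

function config_revert($key) {
  // Dropping the override falls back to the declared default.
  unset($GLOBALS['config_active'][$key]);
}
```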


Another note: If you want to

pounard's picture

Another note:

If you want filesystem-based configuration (yay, I'd love:

echo "false" > /path/to/drupal/config/core/performance/cache/page

by the way ahah!) you have to beware of performance. On systems where the cache is often wiped out, a slow FS will totally crash sites. You should probably store configuration in chunks of variables grouped together (a lot fewer files).

For example, Mono stores its registry (yes, like the Windows registry, but on UNIX systems, because Mono roxx) in XML files, but it stores them more or less on a per-software basis (a more or less complex structure in one file).
This is quite handy because you trigger a lot less I/O (you don't read one file per variable, but one file per business context - better, no?). But I don't like XML; there are a lot of other file formats for doing that.

Another thing here that is REALLY IMPORTANT is that those files should be human readable/editable. For sysadmins, and for developers (JSON is not what I call human editable; INI files "a la Zend" totally are!).

EDIT: Those files could even be PHP settings files (why not?) - while I wouldn't like it (it's unclassy), they would be cached by opcode caches and be ÜBER FAST to read.


How about YAML? Doesn't get

theunraveler's picture

How about YAML? Doesn't get more readable than that...


catch's picture

The two systems I'm thinking most about for this at the moment are hidef and chdb. They both offer persistent, read-only, key-value stores. I haven't played with them yet (hoping to do so in the next week or so). Right now they look like the absolute best option for both memory and cpu, and were specifically designed for this kind of use case. They are only going to be viable for sites which do not allow a lot of configuration changes in production - but I'm hoping that could be one of the base settings of the API anyway (regardless of using this system or not - i.e. you could have a policy where the files are only updated via version control if keeping everything in JSON).

Zookeeper looks interesting, I'd not seen it before.

The main thing with any of these is we should try to get an idea early on of their strengths and limitations, and make sure that whatever interface core provides doesn't rule out using them (unless they turn out to be completely unsuitable early on - but we should find that out now rather than in a year or two).

We're in a somewhat fortunate position compared to usual - when we first introduced the cache API to Drupal, memcache barely existed (if at all?).

chdb had its first release last year, and ZooKeeper looks like it started in 2008 - these are things that didn't even exist when Drupal 7 opened up for development, so we're able to build this with knowledge of them up front.

chaining and non-configuration variables

catch's picture

One more thing with this. Just because we might want to store/load from chdb/hidef for some things doesn't necessarily mean replacing flat file storage with them - I would be more interested in having the flat file storage available on any Drupal environment (so I know where to find it to have a look), but then being able to take that information and compile it to go wherever - so possibly more chaining than plugging.

On top of that, there are several needs here which aren't covered yet:

sites.php and multisite in general.

settings.php - things like database configuration and definition of cache backends. If there is any configuration at all of the configuration backends/storage/caching, then we need some kind of hard-coded minimal bootstrap to determine that, to avoid circular dependencies.

settings.local.php - similarly hard-coded local overrides for different environments, core itself doesn't have settings.local.php but many, many real sites do. It may be we'd want to limit the scope of $conf tweaks to things needed before the full configuration system is available - currently variable_get() accounting for both cases is a nightmare.

Dynamic/volatile variables - if we get rid of the variables table and cache for configuration, then we need somewhere to put all the abuses of that system for things that aren't configuration: content cache clear timestamps, css and js aggregate file mappings, path alias whitelist - and this is just what I can remember in core. cache_get() is not suitable for all these by itself - you want persistent write through for some of them. Also right now they are mainly in variables to avoid separate cache_get() requests for very very high use items, so we may need to look at something like a shared cache group/item backed by the database. Either way - we need to be very aware of the way the current variables system is being abused (with varied levels of success) and not make that any worse than it already is - ideally improve it considerably.

persistent variables and local config

dashohoxha's picture

Although it is a bit late and I don't know where things are now, I support this comment. That is, we should be able to distinguish between configuration and persistent variables. For example, use config_get() and config_set() for configuration, and keep using variable_get() and variable_set() only for persistent variables.

I think we should also be able to distinguish between general configuration and local or site-specific configuration. All configuration should be exported when you make a backup of the site, but local/site-specific settings should not be exported when you make a distribution. For example, for a module that handles sending emails, smtp_user and smtp_pass can be site-specific config, not exported when you generate a distro. Then, during a distro installation, some standard/automatic way can be provided to prompt the user to fill in all the site-specific configs.
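One hypothetical way to mark such keys on disk (this schema is invented purely for illustration) would be a per-key export flag, so a distro build could skip site-specific values like smtp_user and smtp_pass:

```json
{
  "smtp_host": { "value": "smtp.example.com", "export": true },
  "smtp_user": { "value": "mailer", "export": false },
  "smtp_pass": { "value": "secret", "export": false }
}
```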

Just some thoughts there.


translation of configuration

Gábor Hojtsy's picture

Clearly, certain configuration options should be translatable. The default date format, site name, anonymous username, whatever. There are lots of such settings currently, which need to be translatable. Looks like those will be sprinkled all around in this system. Now there is not a lot of discussion here as to how the storage and API itself will relate to the UI of configuration itself. For multilingual sites, both storage and UI is an interesting question. What do you think about this?

EDIT: Also, now that I read in your third post that the working assumption is "If it is an Entity it is content, if not it is configuration", basically we'd need to cover configuration translation and entity translation. We already have entity field translation, and are going to have a more general entity set translation. For configuration, we need the same two approaches: i.e. replace values in configuration, or have different configuration per language. For the site name, you'd probably replace values. For your contact form configuration, you might want to limit certain config forms to certain languages, so you'd have different configuration per language (if your project needs that).
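For the "replace values" approach, one hypothetical on-disk layout (file names and the per-language directory are invented for illustration) would be a language-specific file holding only the overridden keys, merged over the base file at load time:

```text
# /config/drupal.site_information.json  (base)
{ "site_name": "My site", "site_slogan": "Hello" }

# /config/de/drupal.site_information.json  (only the overridden key)
{ "site_name": "Meine Seite" }
```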

When building sites

tsvenson's picture

Great writeup Greg. I don't really understand everything, but get the big picture and what this will mean for building and running sites with Drupal 8.

As I see it there are several levels of configuration, or settings, needed on a site - for Drupal core, contrib modules, as well as features. Some should only be accessible by the site builder/webmaster, others by content administrators, editors, designers, as well as users.

I will use a module that offers a block as a simple example. At first the module is installed and enabled, then the basic configuration is done by the site builder/webmaster. However, it might also be desirable that an editor of a section of the site can change some configuration for it, for example overriding the block title or the number of items listed in their section. If it's a block that is viewed on a user profile page, then you might want to allow users to make similar overrides as well.

So, basically what I am trying to get at is if it will be possible to do something like this with the new configuration system?

If I understand things correctly, this system will make it much easier to separate a website into sections that can have different configurations for the same features. How will this then work with the permission system as well as the theming layer? In some parts I suppose it will be possible to do things that you need OG for today.

You mention the possibility of being able to do rollbacks to earlier versions. How will this be handled? Can I for example make website restore points, like the system restore points you can do in Windows?

One thing that wasn't very clear is whether staging and deployment from dev->test->live sites will be part of this.

Since these configurations are stored on disk, it would be nice if Core also had features to manage configuration templates that easily can be applied, imported and exported.

Edit: Lastly, will it also take into account what device the content is viewed on? That is, can I have different configuration depending on whether it's a desktop or a smartphone, for example?

T: @tsvenson | S: tsvenson.com

A couple of questions

tinyrobot's picture

How much more performant is it going to be not to boot the database layer? Because if the gain is not that significant, it would make more sense to find a database that stores data on disk in a human-readable format (I am guessing that is why JSON is being suggested) and write a PDO driver, so the Configuration API can be built on top of the Database API instead of creating another storage system.

Also, could someone explain how storing configuration on disk, in JSON, buys us anything other than the ability to version-control this stuff? And why is version control important to this project?

If this project were built on top of the Database API, then work on the important stuff could start right away, and the relevance of the storage method could be dealt with later.


Version control is critical

skwashd's picture

If you have tried to move configuration from one Drupal environment to another, you will know how painful it can be. Moving configuration out of the database makes it portable. It also has the advantage of being version control friendly. A bonus of keeping config under version control is that I can compare the on-disk configuration to the intended config in the repo.

The proposal is about more than just moving config to files on disk. It is about changing the way Drupal handles configuration. Currently we have settings.php, {variable}, various module-specific implementations, and Strongarm. settings.php will remain, but the rest will be managed by a common configuration API that should be performant while also being developer and version control friendly. As I see it, heyrocker is proposing to add a little complexity to remove a lot more complexity and duplication.

Version Control is Good

tinyrobot's picture

“Moving configuration out of the database makes it portable.”

This is not a precondition to portability. We can port stuff in the database right now (Entity API and Features).

“The proposal is about more than just moving config to files on disk. It is about changing the way Drupal handles configuration. (…) rest of the stuff will be managed by a common configuration API”

I agree that a standard way of dealing with configuration makes it portable, because we will know exactly how to read and write configuration.

Thanks for your input


Seems a bit over-engineered

pounard's picture

This seems a bit over-engineered. The goal of having database-agnostic configuration (except for caching, which will be really important here to avoid I/O if many files are used) is exactly not to use the database layer at all. A database is a database and will remain a database (it pretty much offers ACID-compliant storage); having a file-based backend using the database API seems like trying to make people believe they can actually have ACID-compliant files (which, in the end, is basically re-developing a DBMS).
If you want local DB storage, use SQLite :)

The goal of using cleanly formatted, human readable/editable files is, in the end, to allow multiple environment configurations (devel, prod, staging, etc...) to live in one common configuration, all kept in one place. Once you have that, you can start versioning your sites during the development process, and sharing configuration among the different instances becomes easier, using the same tools as for the code itself.



tinyrobot's picture

Sorry I guess I posted twice


Thank you for your response

tinyrobot's picture

The goal to have database agnostic configuration is exactly not to use the database layer at all.

I understand that this is one of the goals of this project, but I don’t understand why. Could you tell me more about the benefits, or point me to a discussion where this is addressed?

“While database is database and will remain a database having a file based backend using the database API seems like trying to make people believe they actually can use ACID compliant files (which, in the end, is basically re-developing a DBMS).”

We are not deceiving anybody if we use an actual database system. I am not proposing that we create this file-based backend; I am saying that there are database solutions out there that already do file-based, human-readable storage, and that we can leverage the Database API if we use them. Here is something I found on Google: http://exist.sourceforge.net/. I have never used this database; I am just saying that there are solutions out there that can be explored.

Using this solution would still allow you to keep the configuration separate, and versionable with all the benefits that you mentioned.

I hope you can see that I am not trying to over-engineer anything. I am trying to cut some of the work, so we can spend more time developing a great API rather than a unique way to store the configuration.


Yes I can see it, but the

pounard's picture

Yes, I can see it, but using the actual Database API to handle flat file data seems like a weird idea. A variable store (I don't like that name anymore; I think "schema", as in a dconf schema, would fit better) may well use the database as a backend. A schema access API would be much simpler than the Database API itself, while the database could be a good backend for it, hidden behind the schema API (I'm not talking about the actual Drupal Schema API here, but about a configuration schema API).

Configuration API can be built on top of the Database API instead of creating another storage system.

And what would you do with all the DBTNG functions that don't fit the configuration schema at all? Throw a "not implemented" exception? The API doesn't fit here, IMHO.

eXist-db sounds like great stuff; I didn't know about it before now. Nevertheless, it seems to rely on XPath, XQuery, XSLT and similar facilities rather than anything that looks like the Drupal database layer. They use the tools that fit the need, and theirs are strongly oriented around XML. A good configuration API would probably be a much simpler API, and backends would be easier to develop than if we forced DBTNG on it as the access API and made backend development a lot more complex.



tinyrobot's picture

After reading your comment and thinking for a while, I do think that the Database API is overkill. I was thinking that the Database API could be treated as a general storage API, but it is not that! That must be why they named it Database API and not Storage API.


Capturing diffs as overrides

nedjo's picture

In D6 and D7 overrides are generally saved as forks. E.g., a default view is overridden in the database with a full copy of the view. Any further changes to the original are not passed to the override.

This approach creates a lot of challenges in updating distributed configuration. E.g., sites based on Features are customized via overrides and then cannot easily upgrade to later releases without losing the customizations.

As part of an improved configuration storage system, we should consider whether we can capture in overrides only the difference (additions, changes, and deletions). I've had a very preliminary crack at this problem in the Features override module.