Performance results of table-per-field vs. table-per-content type

We considered preserving support for table-per-content-type storage. There are many reasons not to, and one of the primary objections to abandoning it is the performance penalty of the additional JOINs that table-per-field storage requires. So, I decided to run some benchmarks.

I set up a benchmark environment:
* 64-bit RHEL 5
** Dual quad-core Xeon
** 8GB RAM
** RAID-1 10K RPM SAS
** PHP 5.2 with APC
** MySQL 5.0 (InnoDB)
** Apache 2.2
* Drupal 6.8
* CCK 6.x-2.1
* Devel 6.x-1.12
* Views 6.x-2.1

I set up a content type with the following fields (aside from included files):
* Text (single, unshared)
* Number (single, unshared)
* Text (single, shared)
* Number (single, shared, integer)

I used the Devel module to generate 10,000 nodes, and it populated the CCK fields, too.

Then, I created two views:
* Page view that filters by "published," Number (single), sorts by Text (single), and displays Text (single).
* Page view that filters by "published," Number (shared), sorts by Text (shared), and displays Text (shared).
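
Roughly speaking, the two views generate queries shaped like the following. These are not the exact SQL Views produced; the table and column names are illustrative approximations of CCK's conventions, and the filter value is a placeholder:

  -- benchmark_single: filter, sort, and display all come from the one
  -- per-content-type table, so a single JOIN against node suffices.
  SELECT n.nid, n.title, ct.field_text_value
  FROM node n
  INNER JOIN content_type_benchmark ct ON ct.vid = n.vid
  WHERE n.status = 1 AND ct.field_number_value = 42
  ORDER BY ct.field_text_value;

  -- benchmark_shared: the shared fields live in their own tables, so the
  -- filter and the sort each require an additional JOIN.
  SELECT n.nid, n.title, txt.field_text_shared_value
  FROM node n
  INNER JOIN content_field_number_shared num ON num.vid = n.vid
  INNER JOIN content_field_text_shared txt ON txt.vid = n.vid
  WHERE n.status = 1 AND num.field_number_shared_value = 42
  ORDER BY txt.field_text_shared_value;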

I disabled the Views cache to not benefit too much from the repeated requests to the same pages. In reality, this cache mitigates the performance disadvantage of increased JOINs.

I disabled the query cache in MySQL to not benefit too much from the repeated requests to the same pages. In reality, this cache also mitigates the performance disadvantage of increased JOINs.

Then, I used Apache Bench to get results from this worst-case-scenario test for putting all fields in their own tables:

[straussd@web3 ~]$ ab  -n 1000 -c 5 http://fieldmark.fkbuild.com/benchmark_single
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/

Benchmarking fieldmark.fkbuild.com (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Finished 1000 requests


Server Software:        Apache/2.2.3
Server Hostname:        fieldmark.fkbuild.com
Server Port:            80

Document Path:          /benchmark_single
Document Length:        30082 bytes

Concurrency Level:      5
Time taken for tests:   47.627297 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      30586000 bytes
HTML transferred:       30082000 bytes
Requests per second:    21.00 [#/sec] (mean)
Time per request:       238.136 [ms] (mean)
Time per request:       47.627 [ms] (mean, across all concurrent requests)
Transfer rate:          627.14 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   186  237  25.1    233     380
Waiting:      181  232  25.1    228     376
Total:        186  237  25.1    233     380

Percentage of the requests served within a certain time (ms)
  50%    233
  66%    242
  75%    249
  80%    254
  90%    270
  95%    282
  98%    311
  99%    323
 100%    380 (longest request)
[straussd@web3 ~]$ ab  -n 1000 -c 5 http://fieldmark.fkbuild.com/benchmark_shared
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/

Benchmarking fieldmark.fkbuild.com (be patient)

Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software:        Apache/2.2.3
Server Hostname:        fieldmark.fkbuild.com
Server Port:            80

Document Path:          /benchmark_shared
Document Length:        32382 bytes

Concurrency Level:      5
Time taken for tests:   77.976 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      32886000 bytes
HTML transferred:       32382000 bytes
Requests per second:    12.82 [#/sec] (mean)
Time per request:       389.881 [ms] (mean)
Time per request:       77.976 [ms] (mean, across all concurrent requests)
Transfer rate:          411.86 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       57   65   4.2     65      81
Processing:   266  324  38.3    317     525
Waiting:      253  309  38.0    302     511
Total:        326  389  38.9    382     585

Percentage of the requests served within a certain time (ms)
  50%    382
  66%    401
  75%    412
  80%    417
  90%    442
  95%    463
  98%    493
  99%    507
 100%    585 (longest request)

Conclusion: For the scenario above, which is a worst-case scenario because of multiple layers of disabled caching, we experience a 64% penalty for moving fields into their own tables.

Comments

Sorry, these benchmarks are

merlinofchaos's picture

Sorry, these benchmarks are not valid. Please repeat with 10, 25, 50 and 100 fields on a node.

I disabled the Views cache

merlinofchaos's picture

I disabled the Views cache to not benefit too much from the repeated requests to the same pages. In reality, this cache mitigates the performance disadvantage of increased JOINs.

You must believe there is some level of caching that does not exist, because this statement is not true.

(As for my above comment about the benchmark being invalid, I do not believe that the case of 2 fields with separate storage is a good measure. Maybe 25, 50, and 100 is going overboard, but we MUST, IMO, compare the 10-field case.)

Red flags and scary dragons

eaton's picture

At present, the ability to keep things in a single table is the only thing that keeps a number of high performance Drupal sites I work with online. Among other things, per-field storage eliminates our ability to add compound indexes to the columns in an entity's field table. That's one of the only real "easy wins" in Views/CCK performance optimization.
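
As a concrete sketch of that kind of "easy win" (the content type and field names here are hypothetical), a compound index covering a View's filter and sort is only possible when both columns live in the same per-content-type table:

  -- With table-per-content-type storage, one index can cover both the filter
  -- and the sort of a hot View.
  ALTER TABLE content_type_product
    ADD INDEX idx_status_price (field_status_value, field_price_value);
  -- With table-per-field storage, these two columns sit in different tables,
  -- so no single compound index can serve that query.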

Agreed with merlinofchaos that the 2-4 field case is a BEST case scenario, not a worst case. When working with teams converting to Drupal from other platforms and evaluating the performance implications of their choices, one of the first questions they ask about CCK is, "Is the data properly denormalized?" This is not platform snobbery: it's a critical consideration, and while it's one of the hardest things to manage cleanly in CCK, it's one of the really important ones.

Before it's said, "No one needs 64 fields on a node" is exactly the same as "no one needs more than 640K of memory." If CCK is going to become a core part of Drupal, a fundamental part of how nodes and other entities work, it has to account for these things or it will fail, and drag Drupal down in the process.

also

eaton's picture

I disabled the Views cache to not benefit too much from the repeated requests to the same pages. In reality, this cache mitigates the performance disadvantage of increased JOINs.

Also, to clarify Earl's comment, the Views cache is for metadata describing table and field schemas, NOT for the results of queries. It does nothing to mitigate increased JOINs.

The reason this is a "worst

David Strauss's picture

The reason this is a "worst case" scenario is the lack of any page, views, or MySQL result caching. Adding these narrows the performance gap significantly.

As for needing 10+ fields, you're misinterpreting the MySQL query execution logic I'm trying to analyze. MySQL attempts to narrow and order the result set as quickly as possible. This approach involves JOINing against only the tables necessary to do so; MySQL does not blindly JOIN all tables together before deciding what belongs in the results. The Views I created are "worst case" in this sense because the filters span two tables and the sort occurs on a third: I've forced MySQL to pull together multiple tables before it can fully determine the result set's contents and order. Adding further fields that are included only to display results has a negligible impact on performance (or can, with the right Views design).
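
For example, extending the illustrative per-field query sketched in the write-up above, display-only fields are attached with JOINs that MySQL resolves as cheap per-row lookups after the filter and sort have already fixed the result set (field_foo and field_bar are hypothetical display-only fields):

  SELECT n.nid, n.title, txt.field_text_shared_value,
         foo.field_foo_value, bar.field_bar_value
  FROM node n
  INNER JOIN content_field_number_shared num ON num.vid = n.vid
  INNER JOIN content_field_text_shared txt ON txt.vid = n.vid
  -- These two JOINs add display data only; they do not change which rows
  -- qualify or how they are ordered.
  LEFT JOIN content_field_foo foo ON foo.vid = n.vid
  LEFT JOIN content_field_bar bar ON bar.vid = n.vid
  WHERE n.status = 1 AND num.field_number_shared_value = 42
  ORDER BY txt.field_text_shared_value;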

I'm not trying to answer, "How would table-per-field affect the Drupal 6 Views architecture?" I'm trying to answer, "How would table-per-field affect the best reasonable Views architecture?"

If you say so, but frankly I

merlinofchaos's picture

If you say so, but frankly I think this benchmark is missing the true worst case scenario, and if you're using this to justify what you're doing, you're doing yourself and the community a great disservice.

Frankly, if you guys decide to go ahead and try to get this into core like that, I think you're insane. I think these benchmarks miss the real problem, and this architecture goes into core over my dead body.

(Since I'm not a committer to core that's not a huge impediment, mind you, but I'm serious that this direction is a mistake).

Jumping in...

jredding's picture

I'm flabbergasted that a benchmark of a node type with 4 fields is being used as justification to do fields in core "per-field". Nodes routinely have well more than 4 fields, and they are almost always much more complex than this. What are we going to do when a node has 62 fields? (Hey, it happens.)

Maybe this whole thing needs better documentation or explanation, but as it stands now, with the documentation and information available in this group, I'm amazed that this entire thing is slipping by the community and is about to be committed to core in this fashion.

Please tell me I'm missing something here and things are A-OK in wonderland.

-Jacob Redding

Guarded

csevb10@drupal.org's picture

I don't think that everything - or much of anything, based on some comments - is slipping by the community. I know of plenty of people watching the entire fields-in-core discussion with great interest, but we're definitely in a wait-and-see pattern at the moment. My understanding is that nothing being done is going to be immediately committed to core, so I figure we'll have an opportunity to test and review everything once the code sprint is completed, before rendering any opinions.

Like many others, I'm concerned about the performance implications of the per-field storage decision, but, at the same time, the community has put some of its best developers together to review the situation and make intelligent decisions. They are spending all day (and night?) thinking about this, and making decisions from an informed vantage point. I can't say that I'll be completely comfortable - ok, comfortable at all - until I've seen things come fully together and had an opportunity to review real-world benchmarks on performance, but I do want to give them a chance to deliver their vision for consideration. I've seen many of the performance pitfalls in action, too, so I certainly don't want us to stumble blindly into mistakes, but my feeling is that everyone on that team has seen the same issues and approaches the situation from a similar background. I'm not ready to trumpet a monumental leap forward for Drupal, but I'm not ready to assume all hope is lost either. I look forward to seeing the conclusion of the sprint and the released code, and being able to make a decision objectively then.

That being said, our last project had half a dozen content types with field counts in the double digits, so I'm leery of the idea that 4 fields represents a real-world worst-case scenario. Even if the performance implications of adding another 6 fields are minimal, I'd rather see the test conducted that way to make it more realistic.

this is what...

jredding's picture

got me worried
http://groups.drupal.org/node/17576

"Current status from the Fields in Core sprint, via chx in #drupal

Database first:

Storage will be per-field."

So according to the posts in this group, the decision has been made, or at least partially decided upon. Without tons more information, and at the risk of completely butting in, I'm concerned and I'm expressing that concern.

I like this update http://groups.drupal.org/node/17654
which says
"And, of course, we have yet to convince the community that our approach (particularly the per-field storage tables) is correct. "

At this point I'm very much not convinced.

So yes, maybe I'm exaggerating a bit, but this bit of code will have serious implications for Drupal, and I've seen more discussion about what word to use, taxonomy or category.

-Jacob Redding

True

csevb10@drupal.org's picture

I'll admit that it concerns me, too, and I haven't been convinced yet, either. Likewise, I'm worried that a discussion about changing search terms garners a significantly more vibrant reaction than a discussion about underlying architecture. That being said, I think that the forum on groups.drupal.org commands less visibility than posts on d.o, so hopefully we'll see better community interaction once this makes its way back into those forums.
--Bill

The Cool Stuff

mfer's picture

I wouldn't be worried about this. Not at this stage. The people involved know their stuff much better than most of the Drupal community, and they understand the benefits of performance, and how to get there, more than most. The approach they are taking has some nice advantages that will carry over to other modules.

For example, thanks to Materialized Views (a new core module that will be usable by other modules), loading all of the field info for a node will be just one query with no JOINs. And this is with field storage on a per-field basis.
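
For the read path, that means something like the following single query against a hypothetical materialized table (the table name is invented for illustration):

  -- All field data for one node, pre-flattened into a single row.
  SELECT * FROM mv_story WHERE nid = 123;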

Most devs don't know what materialized views are. Take some time and learn what they are doing because it's some really powerful stuff. Before you jump to conclusions take some time and understand their approach.

Note: Materialized Views has nothing to do with the Views module. It is a technical term (industry wide) for what the module does.

Matt Farina
www.innovatingtomorrow.net
www.geeksandgod.com
www.superaveragepodcast.com
www.mattfarina.com

Unproven as a general solution

eaton's picture

It's important to understand that Materialized Views as discussed in this issue are not handled at the database level. Oracle offers 'Materialized Views', and many database systems offer 'Views' that present several tables as a single one for the purpose of selects. However, the solution proposed in the sprint does not use those database level mechanisms, and instead relies on traditional event-driven Drupal mechanisms in PHP to maintain the 'materialized' tables with multiple queries.

In other words, when you save a node it will insert the node, insert each field, then gather up all the information and insert the 'materialized' records into any materialized views that have been defined. Then, when the time comes to run a select query, you decide whether to use the individual field-specific tables or the big "flattened out" table.
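
Sketched in SQL (with invented table and column names, and assuming the new node gets nid 123), a single node save becomes roughly:

  -- 1. The node record itself.
  INSERT INTO node (nid, vid, type, title, status) VALUES (123, 123, 'story', 'Example', 1);
  -- 2. One insert per per-field table.
  INSERT INTO field_subtitle (nid, vid, field_subtitle_value) VALUES (123, 123, 'An example');
  INSERT INTO field_rating (nid, vid, field_rating_value) VALUES (123, 123, 4);
  -- 3. The flattened 'materialized' copy, maintained from PHP rather than by
  --    database-native triggers or views.
  REPLACE INTO mv_story (nid, title, status, field_subtitle_value, field_rating_value)
  SELECT n.nid, n.title, n.status, s.field_subtitle_value, r.field_rating_value
  FROM node n
  INNER JOIN field_subtitle s ON s.vid = n.vid
  INNER JOIN field_rating r ON r.vid = n.vid
  WHERE n.nid = 123;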

This mechanism has been used to great effect in specific hot-spot situations where the results of one or two complex queries are crushing the site; the optimizations for the tracker here on drupal.org and several of the expensive searches on fastcompany.com stand out. The biggest concerns here are that:

  1. It will now be an absolute requirement for reasonable performance: with just four fields, a 65% slowdown in 'unmaterialized' performance is cataclysmic.
  2. Because the feature is implemented at the Drupal level rather than the database-native level, it WILL put more work on the database during inserts.
  3. On complex sites with a large number of queries, materialized views will require either near-constant updates when any element of the view changes, or separate materialized views maintained for each set of queries, each with slightly different slices of data.
  4. It's an additional non-trivial layer of complexity on top of the relatively simple concept of 'loading stuff from the database', at a time when our developer experience is already skyrocketing in complexity.

All of these concerns may vanish when Materialized Views ... well, materializes :-) But the system that has been proposed and explained isn't over the heads of the folks who are concerned: we've seen stuff like this before in other projects, and in some cases even implemented code like it. Like aggressive caching, the approach is a very useful technique for "cooling off" specific hot spots, but can't be used to ignore serious system-wide performance degradation.

When the code arrives, we can of course benchmark it in a variety of circumstances and see how it holds up compared to the current offerings. However, when we're still in the 'choosing our course of action' phase it's important to look at the pros and cons of a complex new core subsystem.

Good Details

mfer's picture

Thanks for the good details, Jeff. I'm looking forward to seeing tests on large and small data sets for this. If materialized views aren't the right course of action it would be good to know well in advance and restructure the data storage mechanism as soon as possible.

I know that it's not over the heads of everyone concerned. I explained what I was thinking poorly. If only you folks could see what's really going on in my head. :)

I'm starting to wonder if there should be a separation between the field definitions/CRUD and the storage mechanism. Then per field storage could happen, CCK style storage could happen, or storage in something entirely different could happen. hrm...

Matt Farina
www.innovatingtomorrow.net
www.geeksandgod.com
www.superaveragepodcast.com
www.mattfarina.com

I'm sharing Jeff's concerns

alex_b's picture

I'm sharing Jeff's concerns.

Doing a lot of aggregation work, I care about the cost of writing content just as much as about the cost of reading it.

I know I am late in the discussion, but I'm paying more attention now as I see that there is a large camp forming around per field data storage. Given the complex implications for performance and functionality - could it not be the site builder's responsibility to decide whether a field's storage is going to be shared or not for the field's entire life?

http://www.twitter.com/lx_barth

My responses

David Strauss's picture

My responses:
(1) This is a false dilemma. You can always choose your level of materialization, and you have more choices than (a) no materialization and (b) creating a perfectly tailored MV for each query. You could, for example, materialize to combine single-valued fields together by node type. The current CCK model tries to be "one size fits all" and demonstrably fails to scale for large sites.

(2) We have a few approaches to indexing data into materialized views, one of which greatly mitigates real-time performance concerns.

Currently, the MV code queues items for updates on save and then indexes the queued items at the end of the request. This is necessary to allow data like taxonomy fields to be fully interpreted and stored before MV reads the data. (There is no consistent, elegant way to get the term IDs from a node at save time.)

An alternative approach is to queue for indexing offline, just as we currently do for search. This has a minimal impact on real-time save performance at the cost of slightly stale data. The degree of staleness depends on cron run frequency. If you use a table engine with efficient locking, offline indexing puts limited load on the system.

(3) Dynamic filters in MV are designed to allow using one MV to satisfy multiple queries by "slicing" the data in one MV many ways. You can also add arbitrary indexes and columns to MVs to generalize their value. Complex sites should have a significant number of queries sharing a single MV.
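
As a rough sketch of this point (everything here is invented for illustration, not the MV module's actual schema), one wide materialized table plus a couple of composite indexes can back several different listings:

  CREATE TABLE mv_story_listing (
    nid INT UNSIGNED NOT NULL PRIMARY KEY,
    uid INT UNSIGNED NOT NULL,
    term_id INT UNSIGNED NOT NULL,
    created INT NOT NULL,
    field_rating_value INT NOT NULL,
    KEY idx_term_created (term_id, created),
    KEY idx_uid_rating (uid, field_rating_value)
  ) ENGINE=InnoDB;

  -- Two different listings "slice" the same MV with different dynamic filters.
  SELECT nid FROM mv_story_listing WHERE term_id = 7 ORDER BY created DESC LIMIT 10;
  SELECT nid FROM mv_story_listing WHERE uid = 42 ORDER BY field_rating_value DESC LIMIT 10;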

(4) The current solution of inconsistent canonical storage is indisputably harder to code against for basic use.

The only reason (4) above is

merlinofchaos's picture

The only reason (4) above is true is because it can change at will, which is the part people object to. There is no problem in simply forcing it to be locked at creation time; if you want to change it, you migrate your data.

don't share...

jredding's picture

I'm sorry but I don't share your same level of optimism.

eaton listed a good number of reasons to be concerned (the shift from DB level to PHP level, increased DB usage, etc.), so I won't relist them.

Knowing many of the developers working on this, I do trust that they are smart and that they write good code. I honestly do. But this approach has been tried before in other systems and has failed miserably. I also don't think that most devs don't know what materialized views are; I think they do. In fact, I think most devs have tried to implement these in other projects and found problems when they tried to shift DB work up to the application layer.

While I understand the appeal of this approach (flexibility, ease of use, ease of installation), it does somewhat shock me that this is a path Drupal is taking when quite a number of people continue to push for moving certain items into C code to increase performance; remember that you can only go so far in PHP. Moreover, materialized views are being implemented in PHP, which is going to be slower than doing the same thing at the DB level (a few triggers and a bunch of code).

I have concerns, that much is known, but I also have patience. So while I trust that the developers are smart and do know Drupal inside and out, I do not trust that this is the right solution for Drupal, at least not yet.

I'll wait and see what the code produces. Hopefully I'll eat my hat and all of the devs working on this can laugh at me for being concerned.

-Jacob Redding

Allow native DB views

ogi's picture

I would welcome the ability to choose (at least for testing purposes) between

I would like to see a performance comparison between these options on a tuned PostgreSQL setup.

PostgreSQL

ogi's picture

Please repeat the test with PostgreSQL. Folklore has it that PostgreSQL handles JOINs much better than MySQL, and it would be valuable to verify what the penalty is in this case. (SQLite is worth considering too, but I don't know if it's good at JOINs.)

Case closed (?)

bjaspan's picture

I implemented the Per-Bundle Storage module which, I think, puts this issue to bed. Please see http://groups.drupal.org/node/18302.

D7 Field concept cannot work

bg1's picture

Sorry to be coming to this discussion so late.

I am shocked by the Drupal 7 data storage concept of storing data by field rather than by content type.
My database has several hundred content types, and many have > 100 very dynamic fields (meaning some data in each table row changes every few hours, or even every few minutes). Some tables have > 1 million rows (including some with > 100 columns). I have several problems with the new architecture:

  1. I cannot even begin to imagine the performance when accessing an object and having to join 40 - 50 tables to get the result of one table. The alternative of having dynamic updating of some "materialized" view is unrealistic, as the triggers necessary to do this would keep the database in a constant thrashing mode.
  2. Most of our application consists of modules where we use our own SQL statements to retrieve our data, joining whatever content types we need to produce the results we want (many hundreds of queries for the various combinations of data we need). It is inconceivable that we would have to join 20 or 30 more tables to retrieve the data we need for each query.

I do not understand the logic of why anyone would want a field-based solution (unless they do not understand database architecture or have a very simple data structure). I particularly do not understand what is meant by a "shared field". Our database is fully normalized, with the exception of a few report-oriented tables that are periodically generated for optimized reporting purposes.

I have a feeling that Drupal is moving away from application developers and focusing on brochure-ware developers. But from everything I have heard there are already other solutions that are much better for brochure-ware sites such as Wordpress.

It seems to me that the solution would be to enable developers to choose between a field-based or a content-type-based data model (on a content-type-by-content-type basis).

Can anyone explain to me the rationale for this change? Can anyone tell me who is in support of the field-oriented approach? I would like to hear what they were thinking.

"I cannot even begin to

David Strauss's picture

"I cannot even begin to imagine the performance when accessing an object and having to join 40 - 50 tables to get the result of one table."

Neither can I, especially because the right way to query is to only JOIN the tables you need to get the node IDs to load. Next, you do a multi-load of those nodes. You should never need more JOINs than the number of conditions in your query.
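
As a sketch of that pattern (the shipment field names are hypothetical, and the tables follow the general shape of D7's per-field storage), the conditions alone determine the JOINs, and everything else is bulk-loaded afterwards:

  -- Step 1: JOIN only what the conditions need, and fetch entity IDs only.
  SELECT n.nid
  FROM node n
  INNER JOIN field_data_field_ship_status s ON s.entity_id = n.nid
  WHERE n.type = 'shipment' AND s.field_ship_status_value = 'ready'
  ORDER BY n.created DESC
  LIMIT 50;

  -- Step 2: bulk-load the fields for just those nodes, one simple query per
  -- field table; e.g. for nids 101, 102, 103 returned by step 1:
  SELECT * FROM field_data_field_carrier WHERE entity_id IN (101, 102, 103);
  SELECT * FROM field_data_field_weight WHERE entity_id IN (101, 102, 103);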

If you still can't get the performance you need, use a back-end like MongoDB, which will provide much higher-performance storage and querying. (Or even write your own back-end if you want, which includes the option of writing one that stores data the old way, like the Per-Bundle Storage module.)

"It is inconceivable that we would have to join 20 or 30 more tables to retrieve the data we need for each query."

So, don't. Query for the node IDs you need and do a multi-load on the ones you need to display. The old method of querying everything you needed to filter and display the data wasn't fast, anyway.

"I have a feeling that Drupal is moving away from application developers and focusing on brochure-ware developers. But from everything I have heard there are already other solutions that are much better for brochure-ware sites such as Wordpress."

No, it's just moving away from people thinking that all data belongs in relational databases. Our modular field storage has made sites like Examiner.com possible, which would not have been the case with the old design.

"Can anyone explain to me the rational for this change?"

Please read the discussions that have publicly happened here and on Drupal.org before demanding an explanation.

"Can anyone tell me who is in support of the fields oriented approach?"

The people behind building the largest Drupal sites in the world.

Just to update this discussion...

David Strauss's picture
  • Views in D7 now only JOINs tables necessary to identify the entities to display. It then bulk-loads the entities and displays them. This means the number of JOINs is never more than the number of conditions on the query. So, the only cost versus the D6 CCK model is when applying two conditions to a View that both live in the same table; D6 CCK would not require a JOIN but D7 Field API would.
  • Examiner.com added a restricted (no JOINs across content types) but highly useful field query engine that runs very efficiently on field back-ends like MongoDB.

Thank you, but here is our scenario

bg1's picture

Thank you for the response. I really mean it. I was really hoping to find some people with a good understanding of the issue so that I could gain some of that understanding myself.

Here is a concrete example from our shipping site. We have thousands of orders each day hitting our shipping department, for thousands of different products that vary greatly in size, weight, and special packaging requirements. These are orders from resellers, so we often have many orders for the same customer that just happen to have been allocated on that day. Each customer has different rules for how we should ship their orders, when we should consolidate shipments, etc. Their profile determines what carrier and freight account has to be used for each shipment (the customer's third-party account or ours). Then, depending on the customer's profile wishes, the carrier's requirements, and the characteristics of the products involved, we have to determine what goes into which shipments, figure out how we are going to pack the goods into cartons for each shipment, and then work out how we are going to label the cartons and shipments and create pack slips (with custom labeling, etc.), Bills of Lading, and so on. Doing this requires retrieving data from more than 200 fields from around 20 tables that have to be joined based on many criteria. In order to process up to 10,000 shipments in a day per user (including cases that need user intervention to modify shipment service levels, custom recartonization, shipment routing through distribution centers, etc.), we do set processing on almost everything. That means we usually process hundreds of orders/shipments per mouse click. The following content types are involved:

Product
Product Weights and Dimensions (because a product could come prepacked for various quantities)
Kit (to get kitting rules if applicable)
Customer
Carrier
Freight Account (xref between customer and carrier with various conditions to be adhered to)
Shipment
Shipment Line
Shipment Group (Groups of shipments tend to be processed as a single shipment and then broken up into individual shipments at the last moment for bulk processing)
Order
Order Line
Carton
Carton Line
Carton Type (has information about size and the weight of the empty box, etc.)
PackSlip
PackSlipLine
FrtSvcLevel (pick up the rules for that service level with regard to size and wt of shipment)
FrtZones
FrtRates
ASN Header
ASN Dtl Segment
Special Packing (cases where special packaging is required for fragile items, etc.)

And the above list is not complete by any means. In reality, this is not one query, but close to a hundred queries that are fired in specific sequences.

I was close to having this figured out with the content-type model (we would not have used Drupal's CCK interface to build, or Views to retrieve [in most cases], but we would have used nodes to enable comments and other information to be attached to the various objects as they flow through the system). However, we would have created the data equivalent of having used CCK, and we would have used Views for many of the human interfaces for managers, customers, etc.

This works great as a web app and we have been using it for years (shippers can go to any warehouse locations with some computers and printers and be up and operating within hours). But is Drupal 7's data structure really viable for such applications?

Additional Thought

bg1's picture

Just for information. I am primarily a back-end guy. That is why Drupal is so appealing. I want something that will make it easy to make front ends for our data.

In a previous life I actually coded much of the kernel of a DBMS. I therefore am very conscious of all the layers of code right down to retrieving data from the magnetic or other medium and putting it back. Whatever the architecture, I tend to translate the layers down through index structures, page and page buffer management, physical reads and writes, etc. I am also very performance focused. I would rather have a developer spend 10 times as long developing to gain a few seconds in response time or overall throughput.

I realize we might be talking about two very different worlds here. If I were developing a brochure site where I expect hundreds or thousands of views per entry or update, I would have one approach. On the other hand, if I expect hundreds or thousands of entries/updates per user access, then the needs are very different. As with many things in computing, these worlds do not always stay neatly separated. So now I need to have users attach and retrieve content on OLTP transaction objects.

But I have only my own learning and experiences to fall back on and I am always interested in what others are doing and why. It is those people who have broken away from the "until now" practices that have moved technology forward overall.
