Forum Discussion

DavidHalter1
Qrew Member
3 years ago

Pipeline efficiency -- one or many?

Hi:

I'm just getting started with pipelines. I've got the basic concept down. One question I have is about the most efficient option for some of my pipeline ideas that use the Quickbase channel.

I have essentially one trigger (Record Created) that will then add up to 5 "result" records in a separate table. Each "result" is slightly different, but the bulk of the information stays the same (out of currently about 40 fields, only 4 change in each of the up to 5 "results"). I anticipate the pipeline will usually create 1-3 records, with the average close to 2 or a little under.

Later on, for each of those possible 5 result records, I'll need pipelines to handle updates and deletions.

Dealing only with the create-record pipeline for this question: is it going to be more efficient (on the app) to write 1 pipeline that triggers on Record Created and then uses if/then conditions to create up to 5 result records?

Or would it be more efficient to have 5 pipelines that each trigger slightly differently (based on the conditionals that would otherwise be used in only a single pipeline)?
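
To make that concrete, here's the shape I have in mind for the single-pipeline version, written as illustrative pseudo-YAML rather than the exact schema a Pipelines export produces (the table names, the "type" field, and the case values are all placeholders for whatever the real conditionals end up being):

pipeline:
  trigger: record created              # fires on the source table
  steps:
    - if: trigger.type == "Case 1"     # one conditional branch per possible result
      create_record:
        table: Results
        fields: ~36 shared values copied from the trigger, plus the 4 case-1 values
    - if: trigger.type == "Case 2"
      create_record:
        table: Results
        fields: same shared values, plus the 4 case-2 values
    # ... repeated for cases 3 through 5

The 5-pipeline version would just push each "if" up into its trigger's filter, leaving each pipeline as a trigger plus a single create step.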

I'm assuming that from the answer provided I can extrapolate to the pipelines that will deal with Record Updated triggers?

The Record Deleted trigger is just a single pipeline, because it simply deletes all associated records, with no conditionals and no additional loops apart from the initial search, so that one won't change either way.
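
Roughly, that one is just (pseudo-YAML again, placeholder names throughout):

pipeline:
  trigger: record deleted              # fires on the source table
  steps:
    - search: Results where related == trigger.record_id
    - delete: each record the search returns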

I can tell you that from a pipeline-building standpoint, making 5 pipelines seems more efficient, simpler, and therefore less error-prone than making 1 long pipeline to handle all 5 cases. And it seems like I could handle much of the necessary duplication by duplicating an entire pipeline from the Pipelines dashboard, or with export/import of YAML files if need be.
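
And because so little varies between the 5 cases, each duplicated copy would only need a couple of lines touched, something like (pseudo-YAML once more):

# "Create Case 2" = copy of "Create Case 1" with only these lines changed:
trigger_filter: type == "Case 2"       # was "Case 1"
fields: the 4 case-2 values            # the other ~36 stay identical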

Thanks for the input,
Dave

------------------------------
David Halter
------------------------------
  • Hi David,

    To go along with Mark's statement: for smaller pipelines there usually isn't a big performance impact from keeping everything together in one. As they get longer, breaking them up can be a bit more efficient, since each piece then processes separately on the Pipelines side. That doesn't necessarily mean you will see a big difference in performance either way right out of the gate; it depends on your data set size and how much raw data you are moving. It isn't something I worry too much about until I'm getting into a pipeline that might be pretty long or might have a lot of conditions in it, or when I'm expecting the amount of data to really spike.

    ------------------------------
    Evan Martinez
    ------------------------------
  • If it were me, and mentally this is one process, then I would keep it in one pipeline. I wouldn't concern myself with the nuances of which option might be infinitesimally less of a load on some AWS server somewhere.

    ------------------------------
    Mark Shnier (YQC)
    mark.shnier@gmail.com
    ------------------------------