I have a query that has started failing because the underlying table has grown to over 400M rows. The query involves joins and fuzzy matching (LIKE '%XXXX%'), and it keeps failing even though I am using NOLOCK.
I have used staging tables to break up the joins and LIKEs (rough sketch of what I'm doing below), but the real bottleneck is the 400M-row table.
I have thought about splitting the table into pieces and joining them back together, but I am not sure whether that would be an even worse way to approach this.
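For context, this is roughly the staging pattern I am using; the data extension and column names here are placeholders, not my real ones:

    /* Query activity 1: reduce the fuzzy matches into a small staging DE
       (the target DE, [Staging_Matches], is set in the activity itself) */
    SELECT s.SubscriberKey,
           s.ProductCode
    FROM [Source_400M] s WITH (NOLOCK)
    WHERE s.ProductCode LIKE '%XXXX%'

    /* Query activity 2: join the much smaller staging DE to the other tables */
    SELECT st.SubscriberKey,
           c.EmailAddress
    FROM [Staging_Matches] st
    INNER JOIN [Customers] c
        ON st.SubscriberKey = c.SubscriberKey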
I would appreciate any guidance on this.
Best Answer
There is a hidden, indexed field called _customObjectKey in every data extension, and it's fast. While it seems counter-intuitive, you can leverage it in queries that are timing out by adding an additional join.
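Here is a minimal sketch of how that extra join might look; the data extension name [Orders_400M], the columns, and the slice boundaries are all illustrative, so adapt them to your own DE:

    /* Query activity 1 (optional): find the key bounds so you can size the
       slices; the keys may have gaps, so slice by range, not by row count */
    SELECT MIN(_customObjectKey) AS MinKey,
           MAX(_customObjectKey) AS MaxKey
    FROM [Orders_400M]

    /* Query activity 2: the extra self-join on the hidden key gives the
       optimizer an indexed seek, and the range filter lets you process
       one slice of the table per run */
    SELECT o.SubscriberKey,
           o.OrderNumber
    FROM [Orders_400M] o WITH (NOLOCK)
    INNER JOIN [Orders_400M] k WITH (NOLOCK)
        ON o._customObjectKey = k._customObjectKey
    WHERE k._customObjectKey >= 1
      AND k._customObjectKey < 100000001
      AND o.OrderNumber LIKE '%XXXX%'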
NOTE: The _customObjectKey values may not be sequential if the DE has been updated by another query.
There are some other things I've outlined in my Troubleshooting Queries in SFMC blog post: