If it's a lookup relationship, you will need a simple trigger to do this in real time:
trigger onParentObjectDelete on CustomObject__c (before delete) {
    // Collect the Ids of all parent records being deleted
    List<Id> idsToQuery = new List<Id>();
    for (CustomObject__c a : Trigger.old) {
        idsToQuery.add(a.Id);
    }
    // Query all child records whose parents are being deleted
    ChildObject__c[] objsToDelete = [SELECT Id FROM ChildObject__c WHERE ParentId__c IN :idsToQuery];
    delete objsToDelete; // perform the delete
}
This is if you are planning to do a real-time delete. In that case, also be sure to check with the business what should happen if someone undeletes a parent record from the Recycle Bin; you may need an after undelete trigger to handle that as well.
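A minimal sketch of what that undelete handler could look like, reusing the ChildObject__c / ParentId__c names from the snippet above (the exact restore behavior is a business decision; children deleted by the trigger sit in the Recycle Bin and can be undeleted while they are still there):

```apex
trigger onParentObjectUndelete on CustomObject__c (after undelete) {
    // Find the soft-deleted children of the restored parents.
    // ALL ROWS makes the query include records in the Recycle Bin.
    List<ChildObject__c> childrenToRestore = [
        SELECT Id
        FROM ChildObject__c
        WHERE ParentId__c IN :Trigger.newMap.keySet()
        AND IsDeleted = true
        ALL ROWS
    ];
    undelete childrenToRestore;
}
```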
If you don't need a real-time delete, a batch process can fetch the orphaned children (those whose parent lookup is null) and delete them.
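A sketch of that batch job, again assuming the ChildObject__c / ParentId__c names from the trigger example:

```apex
global class OrphanedChildCleanupBatch implements Database.Batchable<sObject> {
    global Database.QueryLocator start(Database.BatchableContext bc) {
        // Children whose parent lookup has been cleared are orphans
        return Database.getQueryLocator(
            'SELECT Id FROM ChildObject__c WHERE ParentId__c = null');
    }
    global void execute(Database.BatchableContext bc, List<sObject> scope) {
        delete scope;
    }
    global void finish(Database.BatchableContext bc) {
        // Optionally notify someone or chain another batch here
    }
}
```

You would run it (or schedule it) with something like `Database.executeBatch(new OrphanedChildCleanupBatch(), 200);`.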
While you can go up five levels in the child-to-parent direction, you can only go down one level in parent-to-child relationships. See the "Understanding Relationship Query Limitations" section towards the end of the Relationship Queries documentation.
So you will need to use two separate queries. Here is one way to do that (might contain typos):
Map<Id, Child__c> children = new Map<Id, Child__c>([
    SELECT Id, Name, Contact__c, FieldA__c
    FROM Child__c
    WHERE Contact__c = :name
]);

// Group the grandchildren by their parent Child__c Id
Map<Id, List<GrandChild__c>> grandChildren = new Map<Id, List<GrandChild__c>>();
for (GrandChild__c grandChild : [
    SELECT Id, Name, Child__c, FieldB__c,
        (SELECT Id, Name, GrandChild__c, FieldC__c FROM GreatGrandChild__r)
    FROM GrandChild__c
    WHERE Child__c IN :children.keySet()
]) {
    List<GrandChild__c> l = grandChildren.get(grandChild.Child__c);
    if (l == null) {
        l = new List<GrandChild__c>();
        grandChildren.put(grandChild.Child__c, l);
    }
    l.add(grandChild);
}

for (Child__c child : children.values()) {
    List<GrandChild__c> l = grandChildren.get(child.Id);
    if (l == null) {
        l = new List<GrandChild__c>();
    }
    // Wrapper now has an extra argument and field: the list of grandchildren, with the
    // great-grandchildren available in each grandchild's GreatGrandChild__r field
    wrapList.add(new Wrapper(false, child, l));
}
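The Wrapper class itself isn't shown in the question, but given the constructor call above it would need roughly this shape (a guess at the original; the field names are placeholders):

```apex
public class Wrapper {
    public Boolean selected { get; set; }
    public Child__c child { get; set; }
    // New field: the grandchildren, each carrying its own
    // GreatGrandChild__r sub-query results
    public List<GrandChild__c> grandChildren { get; set; }

    public Wrapper(Boolean selected, Child__c child, List<GrandChild__c> grandChildren) {
        this.selected = selected;
        this.child = child;
        this.grandChildren = grandChildren;
    }
}
```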
You will need to check (or just try a few likely values) the child relationship name GreatGrandChild__r; normally this would be a plural, so it might be GreatGrandChilds__r or GreatGrandChildren__r. That is a choice the person creating the data model makes.
There is a good practice guideline of 10,000 child records per parent.
The reason for this is that 10,000 is the point that Salesforce has established for describing a situation as having "data skew", a condition where the inherent locking and sharing mechanisms of the platform start to break down by causing increased incidence of row locking errors and degraded performance. Specifically, the situation you're asking about would be called "parent-child data skew" if more than 10,000 child objects existed on a parent record.
It's not a hard and fast rule. It's possible to have more than 10,000 children on a single parent with no negative outcomes. However, that setup massively increases the likelihood of encountering specific classes of problems: row locking errors during high-volume insert and update operations, or sharing-related performance impact (depending on the specific sharing defaults and rules involved).
See Designing Record Access for Enterprise Scale for more, as well as Managing Lookup Skew in Salesforce to Avoid Record Lock Exceptions, or search SFSE or other Salesforce forums for "data skew" for lots of discussion.