/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- */
/* vim: set ts=8 sts=2 et sw=2 tw=80: */
/* This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */

//
// This file implements a garbage-cycle collector based on the paper
//
//   Concurrent Cycle Collection in Reference Counted Systems
//   Bacon & Rajan (2001), ECOOP 2001 / Springer LNCS vol 2072
//
// We are not using the concurrent or acyclic cases of that paper, so
// the green, red and orange colors are not used.
//
// The collector is based on tracking pointers of four colors:
//
// Black nodes are definitely live. If we ever determine that a node is
// black, it's OK to forget about it and drop it from our records.
//
// White nodes are definitely garbage cycles. Once we finish with our
// scanning, we unlink all the white nodes and expect that by
// unlinking them they will self-destruct (since a garbage cycle is
// only keeping itself alive with internal links, by definition).
//
// Snow-white is an addition to the original algorithm. A snow-white
// object has a reference count of zero and is just waiting for deletion.
//
// Grey nodes are being scanned. Nodes that turn grey will turn
// either black if we determine that they're live, or white if we
// determine that they're a garbage cycle. After the main collection
// algorithm there should be no grey nodes.
//
// Purple nodes are *candidates* for being scanned. They are nodes we
// haven't begun scanning yet because they're not old enough, or we're
// still partway through the algorithm.
//
// XPCOM objects participating in garbage-cycle collection are obliged
// to inform us when they ought to turn purple; that is, when their
// refcount transitions from N+1 -> N, for nonzero N. Furthermore we
// require that *after* an XPCOM object has informed us of turning
// purple, it will tell us when it either transitions back to being
// black (incremented refcount) or is ultimately deleted.
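//
// A minimal sketch of that obligation, for a hypothetical cycle-collected
// class (in practice this wiring is generated by the refcounting macros,
// and |participant| stands for the class's nsCycleCollectionParticipant):
//
//   nsrefcnt MyClass::Release()
//   {
//     nsrefcnt count = --mRefCnt;               // N+1 -> N
//     // Tell the CC we may be a cycle candidate: turn purple.
//     NS_CycleCollectorSuspect3(this, participant, &mRefCnt, nullptr);
//     return count;                             // at zero, the object goes
//   }                                           // snow-white; deletion is
//                                               // deferred (see below)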

// Incremental cycle collection
//
// Beyond the simple state machine required to implement incremental
// collection, the CC needs to be able to compensate for things the browser
// is doing during the collection. There are two kinds of problems. For each
// of these, there are two cases to deal with: purple-buffered C++ objects
// and JS objects.

// The first problem is that an object in the CC's graph can become garbage.
// This is bad because the CC touches the objects in its graph at every
// stage of its operation.
//
// All cycle collected C++ objects that die during a cycle collection
// will end up actually getting deleted by the SnowWhiteKiller. Before
// the SWK deletes an object, it checks if an ICC is running, and if so,
// if the object is in the graph. If it is, the CC clears mPointer and
// mParticipant so it does not point to the raw object any more. Because
// objects could die any time the CC returns to the mutator, any time the CC
// accesses a PtrInfo it must perform a null check on mParticipant to
// ensure the object has not gone away.
//
// JS objects don't always run finalizers, so the CC can't remove them from
// the graph when they die. Fortunately, JS objects can only die during a GC,
// so if a GC is begun during an ICC, the browser synchronously finishes off
// the ICC, which clears the entire CC graph. If the GC and CC are scheduled
// properly, this should be rare.
//
// The second problem is that objects in the graph can be changed, say by
// being addrefed or released, or by having a field updated, after the object
// has been added to the graph. The problem is that the ICC can miss a newly
// created reference to an object, and end up unlinking an object that is
// actually alive.
//
// The basic idea of the solution, from "An on-the-fly Reference Counting
// Garbage Collector for Java" by Levanoni and Petrank, is to notice if an
// object has had an additional reference to it created during the collection,
// and if so, not collect it during the current collection. This avoids having
// to rerun the scan as in Bacon & Rajan 2001.
//
// For cycle collected C++ objects, we modify AddRef to place the object in
// the purple buffer, in addition to Release. Then, in the CC, we treat any
// objects in the purple buffer as being alive, after graph building has
// completed. Because they are in the purple buffer, they will be suspected
// in the next CC, so there's no danger of leaks. This is imprecise, because
// we will treat as live an object that has been Released but not AddRefed
// during graph building, but that's probably rare enough that the additional
// bookkeeping overhead is not worthwhile.
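//
// A sketch of that AddRef modification (hypothetical; in practice it is
// part of the refcounting macros, and |participant| again stands for the
// class's nsCycleCollectionParticipant):
//
//   nsrefcnt MyClass::AddRef()
//   {
//     nsrefcnt count = ++mRefCnt;
//     // Re-suspect, so an in-progress ICC treats this object as live
//     // once graph building has completed.
//     NS_CycleCollectorSuspect3(this, participant, &mRefCnt, nullptr);
//     return count;
//   }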
//
// For JS objects, the cycle collector is only looking at gray objects. If a
// gray object is touched during an ICC, it will be made black by UnmarkGray.
// Thus, if a JS object has become black during the ICC, we treat it as live.
// Merged JS zones have to be handled specially: we scan all zone globals.
// If any are black, we treat the zone as being black.


// Safety
//
// An XPCOM object is either scan-safe or scan-unsafe, purple-safe or
// purple-unsafe.
//
// An nsISupports object is scan-safe if:
//
//  - It can be QI'ed to |nsXPCOMCycleCollectionParticipant|, though
//    this operation loses ISupports identity (like nsIClassInfo).
//  - Additionally, the operation |traverse| on the resulting
//    nsXPCOMCycleCollectionParticipant does not cause *any* refcount
//    adjustment to occur (no AddRef / Release calls).
//
// A non-nsISupports ("native") object is scan-safe by explicitly
// providing its nsCycleCollectionParticipant.
//
// An object is purple-safe if it satisfies the following property:
//
//  - The object is scan-safe.
//
// When we receive a pointer |ptr| via
// |nsCycleCollector::suspect(ptr)|, we assume it is purple-safe. We
// can check the scan-safety, but have no way to ensure the
// purple-safety; objects must obey, or else the entire system falls
// apart. Don't involve an object in this scheme if you can't
// guarantee its purple-safety. The easiest way to ensure that an
// object is purple-safe is to use nsCycleCollectingAutoRefCnt.
//
// When we have a scannable set of purple nodes ready, we begin
// our walks. During the walks, the nodes we |traverse| should only
// feed us more scan-safe nodes, and should not adjust the refcounts
// of those nodes.
//
// We do not |AddRef| or |Release| any objects during scanning. We
// rely on the purple-safety of the roots that call |suspect| to
// hold, such that we will clear the pointer from the purple buffer
// entry to the object before it is destroyed. The pointers that are
// merely scan-safe we hold only for the duration of scanning, and
// there should be no objects released from the scan-safe set during
// the scan.
//
// We *do* call |Root| and |Unroot| on every white object, on
// either side of the calls to |Unlink|. This keeps the set of white
// objects alive during the unlinking.
//
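
// A sketch of a scan-safe |traverse| for a hypothetical participant: it
// only reports edges to the callback and performs no AddRef or Release.
//
//   nsresult
//   MyClassParticipant::Traverse(void* aPtr,
//                                nsCycleCollectionTraversalCallback& aCb)
//   {
//     MyClass* obj = static_cast<MyClass*>(aPtr);
//     NS_CYCLE_COLLECTION_NOTE_EDGE_NAME(aCb, "mChild");
//     aCb.NoteXPCOMChild(obj->mChild);  // note the edge; no refcounting
//     return NS_OK;
//   }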

#if !defined(__MINGW32__)
#ifdef WIN32
#include <crtdbg.h>
#include <errno.h>
#endif
#endif

#include "base/process_util.h"

#include "mozilla/ArrayUtils.h"
#include "mozilla/AutoRestore.h"
#include "mozilla/CycleCollectedJSContext.h"
#include "mozilla/DebugOnly.h"
#include "mozilla/HoldDropJSObjects.h"
/* This must occur *after* base/process_util.h to avoid typedef conflicts. */
#include "mozilla/LinkedList.h"
#include "mozilla/MemoryReporting.h"
#include "mozilla/SegmentedVector.h"

#include "nsCycleCollectionParticipant.h"
#include "nsCycleCollectionNoteRootCallback.h"
#include "nsDeque.h"
#include "nsCycleCollector.h"
#include "nsThreadUtils.h"
#include "nsXULAppAPI.h"
#include "prenv.h"
#include "nsPrintfCString.h"
#include "nsTArray.h"
#include "nsIConsoleService.h"
#include "mozilla/Attributes.h"
#include "nsICycleCollectorListener.h"
#include "nsIMemoryReporter.h"
#include "nsIFile.h"
#include "nsDumpUtils.h"
#include "xpcpublic.h"
#include "GeckoProfiler.h"
#include <stdint.h>
#include <stdio.h>

#include "mozilla/AutoGlobalTimelineMarker.h"
#include "mozilla/Likely.h"
#include "mozilla/PoisonIOInterposer.h"
#include "mozilla/Telemetry.h"
#include "mozilla/ThreadLocal.h"

#ifdef MOZ_CRASHREPORTER
#include "nsExceptionHandler.h"
#endif

using namespace mozilla;

//#define COLLECT_TIME_DEBUG

// Enable assertions that are useful for diagnosing errors in graph
// construction.
//#define DEBUG_CC_GRAPH

#define DEFAULT_SHUTDOWN_COLLECTIONS 5

// One to do the freeing, then another to detect there is no more work to do.
#define NORMAL_SHUTDOWN_COLLECTIONS 2

// Cycle collector environment variables
//
// MOZ_CC_LOG_ALL: If defined, always log cycle collector heaps.
//
// MOZ_CC_LOG_SHUTDOWN: If defined, log cycle collector heaps at shutdown.
//
// MOZ_CC_LOG_THREAD: If set to "main", only automatically log main thread
// CCs. If set to "worker", only automatically log worker CCs. If set to "all",
// log both. The default value is "all". This must be used with either
// MOZ_CC_LOG_ALL or MOZ_CC_LOG_SHUTDOWN for it to do anything.
//
// MOZ_CC_LOG_PROCESS: If set to "main", only automatically log main process
// CCs. If set to "content", only automatically log tab CCs. If set to
// "plugins", only automatically log plugin CCs. If set to "all", log
// everything. The default value is "all". This must be used with either
// MOZ_CC_LOG_ALL or MOZ_CC_LOG_SHUTDOWN for it to do anything.
//
// MOZ_CC_ALL_TRACES: If set to "all", any cycle collector
// logging done will be WantAllTraces, which disables
// various cycle collector optimizations to give a fuller picture of
// the heap. If set to "shutdown", only shutdown logging will be WantAllTraces.
// The default is none.
//
// MOZ_CC_RUN_DURING_SHUTDOWN: In non-DEBUG builds, if this is set,
// run cycle collections at shutdown.
//
// MOZ_CC_LOG_DIRECTORY: The directory in which logs are placed (such as
// logs from MOZ_CC_LOG_ALL and MOZ_CC_LOG_SHUTDOWN, or other uses
// of nsICycleCollectorListener)
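//
// For example (a hypothetical invocation), to log only shutdown CCs from
// the main process into /tmp/cc-logs:
//
//   MOZ_CC_LOG_SHUTDOWN=1 MOZ_CC_LOG_PROCESS=main \
//   MOZ_CC_LOG_DIRECTORY=/tmp/cc-logs ./firefox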

// Various parameters of this collector can be tuned using environment
// variables.

struct nsCycleCollectorParams
{
  bool mLogAll;
  bool mLogShutdown;
  bool mAllTracesAll;
  bool mAllTracesShutdown;
  bool mLogThisThread;

  nsCycleCollectorParams() :
    mLogAll(PR_GetEnv("MOZ_CC_LOG_ALL") != nullptr),
    mLogShutdown(PR_GetEnv("MOZ_CC_LOG_SHUTDOWN") != nullptr),
    mAllTracesAll(false),
    mAllTracesShutdown(false)
  {
    const char* logThreadEnv = PR_GetEnv("MOZ_CC_LOG_THREAD");
    bool threadLogging = true;
    if (logThreadEnv && !!strcmp(logThreadEnv, "all")) {
      if (NS_IsMainThread()) {
        threadLogging = !strcmp(logThreadEnv, "main");
      } else {
        threadLogging = !strcmp(logThreadEnv, "worker");
      }
    }

    const char* logProcessEnv = PR_GetEnv("MOZ_CC_LOG_PROCESS");
    bool processLogging = true;
    if (logProcessEnv && !!strcmp(logProcessEnv, "all")) {
      switch (XRE_GetProcessType()) {
        case GeckoProcessType_Default:
          processLogging = !strcmp(logProcessEnv, "main");
          break;
        case GeckoProcessType_Plugin:
          processLogging = !strcmp(logProcessEnv, "plugins");
          break;
        case GeckoProcessType_Content:
          processLogging = !strcmp(logProcessEnv, "content");
          break;
        default:
          processLogging = false;
          break;
      }
    }
    mLogThisThread = threadLogging && processLogging;

    const char* allTracesEnv = PR_GetEnv("MOZ_CC_ALL_TRACES");
    if (allTracesEnv) {
      if (!strcmp(allTracesEnv, "all")) {
        mAllTracesAll = true;
      } else if (!strcmp(allTracesEnv, "shutdown")) {
        mAllTracesShutdown = true;
      }
    }
  }

  bool LogThisCC(bool aIsShutdown)
  {
    return (mLogAll || (aIsShutdown && mLogShutdown)) && mLogThisThread;
  }

  bool AllTracesThisCC(bool aIsShutdown)
  {
    return mAllTracesAll || (aIsShutdown && mAllTracesShutdown);
  }
};

#ifdef COLLECT_TIME_DEBUG
class TimeLog
{
public:
  TimeLog() : mLastCheckpoint(TimeStamp::Now())
  {
  }

  void
  Checkpoint(const char* aEvent)
  {
    TimeStamp now = TimeStamp::Now();
    double dur = (now - mLastCheckpoint).ToMilliseconds();
    if (dur >= 0.5) {
      printf("cc: %s took %.1fms\n", aEvent, dur);
    }
    mLastCheckpoint = now;
  }

private:
  TimeStamp mLastCheckpoint;
};
#else
class TimeLog
{
public:
  TimeLog()
  {
  }
  void Checkpoint(const char* aEvent)
  {
  }
};
#endif


////////////////////////////////////////////////////////////////////////
// Base types
////////////////////////////////////////////////////////////////////////

struct PtrInfo;

class EdgePool
{
public:
  // EdgePool allocates arrays of void*, primarily to hold PtrInfo*.
  // However, at the end of a block, the last two pointers are a null
  // and then a void** pointing to the next block. This allows
  // EdgePool::Iterators to be a single word but still capable of crossing
  // block boundaries.

  EdgePool()
  {
    mSentinelAndBlocks[0].block = nullptr;
    mSentinelAndBlocks[1].block = nullptr;
  }

  ~EdgePool()
  {
    MOZ_ASSERT(!mSentinelAndBlocks[0].block &&
               !mSentinelAndBlocks[1].block,
               "Didn't call Clear()?");
  }

  void Clear()
  {
    EdgeBlock* b = EdgeBlocks();
    while (b) {
      EdgeBlock* next = b->Next();
      delete b;
      b = next;
    }

    mSentinelAndBlocks[0].block = nullptr;
    mSentinelAndBlocks[1].block = nullptr;
  }

#ifdef DEBUG
  bool IsEmpty()
  {
    return !mSentinelAndBlocks[0].block &&
           !mSentinelAndBlocks[1].block;
  }
#endif

private:
  struct EdgeBlock;
  union PtrInfoOrBlock
  {
    // Use a union to avoid reinterpret_cast and the ensuing
    // potential aliasing bugs.
    PtrInfo* ptrInfo;
    EdgeBlock* block;
  };
  struct EdgeBlock
  {
    enum { EdgeBlockSize = 16 * 1024 };

    PtrInfoOrBlock mPointers[EdgeBlockSize];
    EdgeBlock()
    {
      mPointers[EdgeBlockSize - 2].block = nullptr; // sentinel
      mPointers[EdgeBlockSize - 1].block = nullptr; // next block pointer
    }
    EdgeBlock*& Next()
    {
      return mPointers[EdgeBlockSize - 1].block;
    }
    PtrInfoOrBlock* Start()
    {
      return &mPointers[0];
    }
    PtrInfoOrBlock* End()
    {
      return &mPointers[EdgeBlockSize - 2];
    }
  };

  // Store the null sentinel so that we can have valid iterators
  // before adding any edges and without adding any blocks.
  PtrInfoOrBlock mSentinelAndBlocks[2];

  EdgeBlock*& EdgeBlocks()
  {
    return mSentinelAndBlocks[1].block;
  }
  EdgeBlock* EdgeBlocks() const
  {
    return mSentinelAndBlocks[1].block;
  }

public:
  class Iterator
  {
  public:
    Iterator() : mPointer(nullptr) {}
    explicit Iterator(PtrInfoOrBlock* aPointer) : mPointer(aPointer) {}
    Iterator(const Iterator& aOther) : mPointer(aOther.mPointer) {}

    Iterator& operator++()
    {
      if (!mPointer->ptrInfo) {
        // Null pointer is a sentinel for link to the next block.
        mPointer = (mPointer + 1)->block->mPointers;
      }
      ++mPointer;
      return *this;
    }

    PtrInfo* operator*() const
    {
      if (!mPointer->ptrInfo) {
        // Null pointer is a sentinel for link to the next block.
        return (mPointer + 1)->block->mPointers->ptrInfo;
      }
      return mPointer->ptrInfo;
    }
    bool operator==(const Iterator& aOther) const
    {
      return mPointer == aOther.mPointer;
    }
    bool operator!=(const Iterator& aOther) const
    {
      return mPointer != aOther.mPointer;
    }

#ifdef DEBUG_CC_GRAPH
    bool Initialized() const
    {
      return mPointer != nullptr;
    }
#endif

  private:
    PtrInfoOrBlock* mPointer;
  };

  class Builder;
  friend class Builder;
  class Builder
  {
  public:
    explicit Builder(EdgePool& aPool)
      : mCurrent(&aPool.mSentinelAndBlocks[0])
      , mBlockEnd(&aPool.mSentinelAndBlocks[0])
      , mNextBlockPtr(&aPool.EdgeBlocks())
    {
    }

    Iterator Mark()
    {
      return Iterator(mCurrent);
    }

    void Add(PtrInfo* aEdge)
    {
      if (mCurrent == mBlockEnd) {
        EdgeBlock* b = new EdgeBlock();
        *mNextBlockPtr = b;
        mCurrent = b->Start();
        mBlockEnd = b->End();
        mNextBlockPtr = &b->Next();
      }
      (mCurrent++)->ptrInfo = aEdge;
    }
  private:
    // mBlockEnd points to space for null sentinel
    PtrInfoOrBlock* mCurrent;
    PtrInfoOrBlock* mBlockEnd;
    EdgeBlock** mNextBlockPtr;
  };

  size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const
  {
    size_t n = 0;
    EdgeBlock* b = EdgeBlocks();
    while (b) {
      n += aMallocSizeOf(b);
      b = b->Next();
    }
    return n;
  }
};
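
// A usage sketch (hypothetical) of how the graph builder records one node's
// edge list: mark the start, append edges, then store the range on the
// node's PtrInfo (defined below).
//
//   EdgePool::Builder builder(mGraph.mEdges);
//   EdgePool::Iterator first = builder.Mark();
//   builder.Add(childPtrInfo);          // one Add() per outgoing edge
//   node->SetFirstChild(first);
//   node->SetLastChild(builder.Mark()); // one past the last edge added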

#ifdef DEBUG_CC_GRAPH
#define CC_GRAPH_ASSERT(b) MOZ_ASSERT(b)
#else
#define CC_GRAPH_ASSERT(b)
#endif

#define CC_TELEMETRY(_name, _value)                                          \
  PR_BEGIN_MACRO                                                             \
  if (NS_IsMainThread()) {                                                   \
    Telemetry::Accumulate(Telemetry::CYCLE_COLLECTOR##_name, _value);        \
  } else {                                                                   \
    Telemetry::Accumulate(Telemetry::CYCLE_COLLECTOR_WORKER##_name, _value); \
  }                                                                          \
  PR_END_MACRO

enum NodeColor { black, white, grey };

// This structure should be kept as small as possible; we may expect
// hundreds of thousands of them to be allocated and touched
// repeatedly during each cycle collection.

struct PtrInfo
{
  void* mPointer;
  nsCycleCollectionParticipant* mParticipant;
  uint32_t mColor : 2;
  uint32_t mInternalRefs : 30;
  uint32_t mRefCount;
private:
  EdgePool::Iterator mFirstChild;

  static const uint32_t kInitialRefCount = UINT32_MAX - 1;

public:

  PtrInfo(void* aPointer, nsCycleCollectionParticipant* aParticipant)
    : mPointer(aPointer),
      mParticipant(aParticipant),
      mColor(grey),
      mInternalRefs(0),
      mRefCount(kInitialRefCount),
      mFirstChild()
  {
    MOZ_ASSERT(aParticipant);

    // We initialize mRefCount to a large non-zero value so
    // that it doesn't look like a JS object to the cycle collector
    // in the case where the object dies before being traversed.
    MOZ_ASSERT(!IsGrayJS() && !IsBlackJS());
  }

  // Allow NodePool::NodeBlock's constructor to compile.
  PtrInfo()
  {
    NS_NOTREACHED("should never be called");
  }

  bool IsGrayJS() const
  {
    return mRefCount == 0;
  }

  bool IsBlackJS() const
  {
    return mRefCount == UINT32_MAX;
  }

  bool WasTraversed() const
  {
    return mRefCount != kInitialRefCount;
  }

  EdgePool::Iterator FirstChild() const
  {
    CC_GRAPH_ASSERT(mFirstChild.Initialized());
    return mFirstChild;
  }

  // This PtrInfo must be part of a NodePool.
  EdgePool::Iterator LastChild() const
  {
    CC_GRAPH_ASSERT((this + 1)->mFirstChild.Initialized());
    return (this + 1)->mFirstChild;
  }

  void SetFirstChild(EdgePool::Iterator aFirstChild)
  {
    CC_GRAPH_ASSERT(aFirstChild.Initialized());
    mFirstChild = aFirstChild;
  }

  // This PtrInfo must be part of a NodePool.
  void SetLastChild(EdgePool::Iterator aLastChild)
  {
    CC_GRAPH_ASSERT(aLastChild.Initialized());
    (this + 1)->mFirstChild = aLastChild;
  }
};
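
// A summary of the mRefCount encoding above (derived from IsGrayJS,
// IsBlackJS and WasTraversed):
//
//   0              - a gray JS object (JS objects carry no refcount)
//   UINT32_MAX     - a black JS object
//   UINT32_MAX - 1 - kInitialRefCount: the node has not been traversed yet
//   anything else  - the refcount of a traversed C++ object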

/**
 * A structure designed to be used like a linked list of PtrInfo, except
 * it allocates many PtrInfos at a time.
 */
class NodePool
{
private:
  // The -2 allows us to use |NodeBlockSize + 1| for |mEntries|, and fit
  // |mNext|, all without causing slop.
  enum { NodeBlockSize = 4 * 1024 - 2 };

  struct NodeBlock
  {
    // We create and destroy NodeBlock using malloc/free rather than new
    // and delete to avoid calling its constructor and destructor.
    NodeBlock()
    {
      NS_NOTREACHED("should never be called");

      // Ensure NodeBlock is the right size (see the comment on NodeBlockSize
      // above).
      static_assert(
        sizeof(NodeBlock) == 81904 ||  // 32-bit; equals 19.996 x 4 KiB pages
        sizeof(NodeBlock) == 131048,   // 64-bit; equals 31.994 x 4 KiB pages
        "ill-sized NodeBlock"
      );
    }
    ~NodeBlock()
    {
      NS_NOTREACHED("should never be called");
    }

    NodeBlock* mNext;
    PtrInfo mEntries[NodeBlockSize + 1]; // +1 to store last child of last node
  };

public:
  NodePool()
    : mBlocks(nullptr)
    , mLast(nullptr)
  {
  }

  ~NodePool()
  {
    MOZ_ASSERT(!mBlocks, "Didn't call Clear()?");
  }

  void Clear()
  {
    NodeBlock* b = mBlocks;
    while (b) {
      NodeBlock* n = b->mNext;
      free(b);
      b = n;
    }

    mBlocks = nullptr;
    mLast = nullptr;
  }

#ifdef DEBUG
  bool IsEmpty()
  {
    return !mBlocks && !mLast;
  }
#endif

  class Builder;
  friend class Builder;
  class Builder
  {
  public:
    explicit Builder(NodePool& aPool)
      : mNextBlock(&aPool.mBlocks)
      , mNext(aPool.mLast)
      , mBlockEnd(nullptr)
    {
      MOZ_ASSERT(!aPool.mBlocks && !aPool.mLast, "pool not empty");
    }
    PtrInfo* Add(void* aPointer, nsCycleCollectionParticipant* aParticipant)
    {
      if (mNext == mBlockEnd) {
        NodeBlock* block = static_cast<NodeBlock*>(malloc(sizeof(NodeBlock)));
        if (!block) {
          return nullptr;
        }

        *mNextBlock = block;
        mNext = block->mEntries;
        mBlockEnd = block->mEntries + NodeBlockSize;
        block->mNext = nullptr;
        mNextBlock = &block->mNext;
      }
      return new (mozilla::KnownNotNull, mNext++) PtrInfo(aPointer, aParticipant);
    }
  private:
    NodeBlock** mNextBlock;
    PtrInfo*& mNext;
    PtrInfo* mBlockEnd;
  };

  class Enumerator;
  friend class Enumerator;
  class Enumerator
  {
  public:
    explicit Enumerator(NodePool& aPool)
      : mFirstBlock(aPool.mBlocks)
      , mCurBlock(nullptr)
      , mNext(nullptr)
      , mBlockEnd(nullptr)
      , mLast(aPool.mLast)
    {
    }

    bool IsDone() const
    {
      return mNext == mLast;
    }

    bool AtBlockEnd() const
    {
      return mNext == mBlockEnd;
    }

    PtrInfo* GetNext()
    {
      MOZ_ASSERT(!IsDone(), "calling GetNext when done");
      if (mNext == mBlockEnd) {
        NodeBlock* nextBlock = mCurBlock ? mCurBlock->mNext : mFirstBlock;
        mNext = nextBlock->mEntries;
        mBlockEnd = mNext + NodeBlockSize;
        mCurBlock = nextBlock;
      }
      return mNext++;
    }
  private:
    // mFirstBlock is a reference to allow an Enumerator to be constructed
    // for an empty graph.
    NodeBlock*& mFirstBlock;
    NodeBlock* mCurBlock;
    // mNext is the next value we want to return, unless mNext == mBlockEnd.
    // NB: mLast is a reference to allow enumerating while building!
    PtrInfo* mNext;
    PtrInfo* mBlockEnd;
    PtrInfo*& mLast;
  };

  size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const
  {
    // We don't measure the things pointed to by mEntries[] because those
    // pointers are non-owning.
    size_t n = 0;
    NodeBlock* b = mBlocks;
    while (b) {
      n += aMallocSizeOf(b);
      b = b->mNext;
    }
    return n;
  }

private:
  NodeBlock* mBlocks;
  PtrInfo* mLast;
};
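
// A usage sketch (hypothetical): nodes are appended through a Builder while
// graph building is in progress, and an Enumerator can walk what has been
// added so far, even while building continues.
//
//   NodePool::Builder builder(mGraph.mNodes);
//   PtrInfo* pi = builder.Add(aPointer, aParticipant); // null on OOM
//
//   NodePool::Enumerator etor(mGraph.mNodes);
//   while (!etor.IsDone()) {
//     PtrInfo* node = etor.GetNext();
//     // ...
//   }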


// Declarations for mPtrToNodeMap.

struct PtrToNodeEntry : public PLDHashEntryHdr
{
  // The key is mNode->mPointer
  PtrInfo* mNode;
};

static bool
PtrToNodeMatchEntry(const PLDHashEntryHdr* aEntry, const void* aKey)
{
  const PtrToNodeEntry* n = static_cast<const PtrToNodeEntry*>(aEntry);
  return n->mNode->mPointer == aKey;
}

static PLDHashTableOps PtrNodeOps = {
  PLDHashTable::HashVoidPtrKeyStub,
  PtrToNodeMatchEntry,
  PLDHashTable::MoveEntryStub,
  PLDHashTable::ClearEntryStub,
  nullptr
};


struct WeakMapping
{
  // map and key will be null if the corresponding objects are GC marked
  PtrInfo* mMap;
  PtrInfo* mKey;
  PtrInfo* mKeyDelegate;
  PtrInfo* mVal;
};

class CCGraphBuilder;

struct CCGraph
{
  NodePool mNodes;
  EdgePool mEdges;
  nsTArray<WeakMapping> mWeakMaps;
  uint32_t mRootCount;

private:
  PLDHashTable mPtrToNodeMap;
  bool mOutOfMemory;

  static const uint32_t kInitialMapLength = 16384;

public:
  CCGraph()
    : mRootCount(0)
    , mPtrToNodeMap(&PtrNodeOps, sizeof(PtrToNodeEntry), kInitialMapLength)
    , mOutOfMemory(false)
  {}

  ~CCGraph() {}

  void Init()
  {
    MOZ_ASSERT(IsEmpty(), "Failed to call CCGraph::Clear");
  }

  void Clear()
  {
    mNodes.Clear();
    mEdges.Clear();
    mWeakMaps.Clear();
    mRootCount = 0;
    mPtrToNodeMap.ClearAndPrepareForLength(kInitialMapLength);
    mOutOfMemory = false;
  }

#ifdef DEBUG
  bool IsEmpty()
  {
    return mNodes.IsEmpty() && mEdges.IsEmpty() &&
           mWeakMaps.IsEmpty() && mRootCount == 0 &&
           mPtrToNodeMap.EntryCount() == 0;
  }
#endif

  PtrInfo* FindNode(void* aPtr);
  PtrToNodeEntry* AddNodeToMap(void* aPtr);
  void RemoveObjectFromMap(void* aObject);

  uint32_t MapCount() const
  {
    return mPtrToNodeMap.EntryCount();
  }

  size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const
  {
    size_t n = 0;

    n += mNodes.SizeOfExcludingThis(aMallocSizeOf);
    n += mEdges.SizeOfExcludingThis(aMallocSizeOf);

    // We don't measure what the WeakMappings point to, because the
    // pointers are non-owning.
    n += mWeakMaps.ShallowSizeOfExcludingThis(aMallocSizeOf);

    n += mPtrToNodeMap.ShallowSizeOfExcludingThis(aMallocSizeOf);

    return n;
  }

private:
  PtrToNodeEntry* FindNodeEntry(void* aPtr)
  {
    return static_cast<PtrToNodeEntry*>(mPtrToNodeMap.Search(aPtr));
  }
};

PtrInfo*
CCGraph::FindNode(void* aPtr)
{
  PtrToNodeEntry* e = FindNodeEntry(aPtr);
  return e ? e->mNode : nullptr;
}

PtrToNodeEntry*
CCGraph::AddNodeToMap(void* aPtr)
{
  JS::AutoSuppressGCAnalysis suppress;
  if (mOutOfMemory) {
    return nullptr;
  }

  auto e = static_cast<PtrToNodeEntry*>(mPtrToNodeMap.Add(aPtr, fallible));
  if (!e) {
    mOutOfMemory = true;
    MOZ_ASSERT(false, "Ran out of memory while building cycle collector graph");
    return nullptr;
  }
  return e;
}

void
CCGraph::RemoveObjectFromMap(void* aObj)
{
  PtrToNodeEntry* e = FindNodeEntry(aObj);
  PtrInfo* pinfo = e ? e->mNode : nullptr;
  if (pinfo) {
    mPtrToNodeMap.RemoveEntry(e);

    pinfo->mPointer = nullptr;
    pinfo->mParticipant = nullptr;
  }
}


static nsISupports*
CanonicalizeXPCOMParticipant(nsISupports* aIn)
{
  nsISupports* out = nullptr;
  aIn->QueryInterface(NS_GET_IID(nsCycleCollectionISupports),
                      reinterpret_cast<void**>(&out));
  return out;
}

static inline void
ToParticipant(nsISupports* aPtr, nsXPCOMCycleCollectionParticipant** aCp);

static void
CanonicalizeParticipant(void** aParti, nsCycleCollectionParticipant** aCp)
{
  // If the participant is null, this is an nsISupports participant,
  // so we must QI to get the real participant.

  if (!*aCp) {
    nsISupports* nsparti = static_cast<nsISupports*>(*aParti);
    nsparti = CanonicalizeXPCOMParticipant(nsparti);
    NS_ASSERTION(nsparti,
                 "Don't add objects that don't participate in collection!");
    nsXPCOMCycleCollectionParticipant* xcp;
    ToParticipant(nsparti, &xcp);
    *aParti = nsparti;
    *aCp = xcp;
  }
}

struct nsPurpleBufferEntry
{
  union
  {
    void* mObject;                        // when low bit unset
    nsPurpleBufferEntry* mNextInFreeList; // when low bit set
  };

  nsCycleCollectingAutoRefCnt* mRefCnt;

  nsCycleCollectionParticipant* mParticipant; // nullptr for nsISupports
};
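
// The low bit of the first word distinguishes the two union arms: a live
// entry holds an object pointer (word-aligned, so the low bit is clear),
// while a free-list entry stores the next free slot with the low bit set.
// A sketch of the tagging arithmetic used below:
//
//   e->mNextInFreeList = (nsPurpleBufferEntry*)(uintptr_t(next) | 1);
//   bool isFree = uintptr_t(e->mObject) & 1;
//   nsPurpleBufferEntry* next =
//     (nsPurpleBufferEntry*)(uintptr_t(e->mNextInFreeList) & ~uintptr_t(1));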

class nsCycleCollector;

struct nsPurpleBuffer
{
private:
  struct PurpleBlock
  {
    PurpleBlock* mNext;
    // Try to match the size of a jemalloc bucket, to minimize slop bytes.
    // - On 32-bit platforms sizeof(nsPurpleBufferEntry) is 12, so mEntries
    //   is 16,380 bytes, which leaves 4 bytes for mNext.
    // - On 64-bit platforms sizeof(nsPurpleBufferEntry) is 24, so mEntries
    //   is 32,760 bytes, which leaves 8 bytes for mNext.
    nsPurpleBufferEntry mEntries[1365];

    PurpleBlock() : mNext(nullptr)
    {
      // Ensure PurpleBlock is the right size (see above).
      static_assert(
        sizeof(PurpleBlock) == 16384 || // 32-bit
        sizeof(PurpleBlock) == 32768,   // 64-bit
        "ill-sized nsPurpleBuffer::PurpleBlock"
      );

      InitNextPointers();
    }

    // Put all the entries in the block on the free list.
    void InitNextPointers()
    {
      for (uint32_t i = 1; i < ArrayLength(mEntries); ++i) {
        mEntries[i - 1].mNextInFreeList =
          (nsPurpleBufferEntry*)(uintptr_t(mEntries + i) | 1);
      }
      mEntries[ArrayLength(mEntries) - 1].mNextInFreeList =
        (nsPurpleBufferEntry*)1;
    }

    template<class PurpleVisitor>
    void VisitEntries(nsPurpleBuffer& aBuffer, PurpleVisitor& aVisitor)
    {
      nsPurpleBufferEntry* eEnd = ArrayEnd(mEntries);
      for (nsPurpleBufferEntry* e = mEntries; e != eEnd; ++e) {
        MOZ_ASSERT(e->mObject,
                   "There should be no null mObject when we iterate over the purple buffer");
        if (!(uintptr_t(e->mObject) & uintptr_t(1)) && e->mObject) {
          aVisitor.Visit(aBuffer, e);
        }
      }
    }
  };

  // This class wraps a linked list of the elements in the purple
  // buffer.

  uint32_t mCount;
  PurpleBlock mFirstBlock;
  nsPurpleBufferEntry* mFreeList;

public:
  nsPurpleBuffer()
  {
    InitBlocks();
  }

  ~nsPurpleBuffer()
  {
    FreeBlocks();
  }

  template<class PurpleVisitor>
  void VisitEntries(PurpleVisitor& aVisitor)
  {
    for (PurpleBlock* b = &mFirstBlock; b; b = b->mNext) {
      b->VisitEntries(*this, aVisitor);
    }
  }

  void InitBlocks()
  {
    mCount = 0;
    mFreeList = mFirstBlock.mEntries;
  }

  void FreeBlocks()
  {
    if (mCount > 0) {
      UnmarkRemainingPurple(&mFirstBlock);
    }
    PurpleBlock* b = mFirstBlock.mNext;
    while (b) {
      if (mCount > 0) {
        UnmarkRemainingPurple(b);
      }
      PurpleBlock* next = b->mNext;
      delete b;
      b = next;
    }
    mFirstBlock.mNext = nullptr;
  }

  struct UnmarkRemainingPurpleVisitor
  {
    void
    Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry)
    {
      if (aEntry->mRefCnt) {
        aEntry->mRefCnt->RemoveFromPurpleBuffer();
        aEntry->mRefCnt = nullptr;
      }
      aEntry->mObject = nullptr;
      --aBuffer.mCount;
    }
  };

  void UnmarkRemainingPurple(PurpleBlock* aBlock)
  {
    UnmarkRemainingPurpleVisitor visitor;
    aBlock->VisitEntries(*this, visitor);
  }

  void SelectPointers(CCGraphBuilder& aBuilder);

  // RemoveSkippable removes entries from the purple buffer synchronously
  // (1) if aAsyncSnowWhiteFreeing is false and nsPurpleBufferEntry::mRefCnt is 0 or
  // (2) if the object's nsXPCOMCycleCollectionParticipant::CanSkip() returns true or
  // (3) if nsPurpleBufferEntry::mRefCnt->IsPurple() is false.
  // (4) If aRemoveChildlessNodes is true, then any nodes in the purple buffer
  //     that will have no children in the cycle collector graph will also be
  //     removed. CanSkip() may be run on these children.
  void RemoveSkippable(nsCycleCollector* aCollector,
                       bool aRemoveChildlessNodes,
                       bool aAsyncSnowWhiteFreeing,
                       CC_ForgetSkippableCallback aCb);

  MOZ_ALWAYS_INLINE nsPurpleBufferEntry* NewEntry()
  {
    if (MOZ_UNLIKELY(!mFreeList)) {
      PurpleBlock* b = new PurpleBlock;
      mFreeList = b->mEntries;

      // Add the new block as the second block in the list.
      b->mNext = mFirstBlock.mNext;
      mFirstBlock.mNext = b;
    }

    nsPurpleBufferEntry* e = mFreeList;
    mFreeList = (nsPurpleBufferEntry*)
      (uintptr_t(mFreeList->mNextInFreeList) & ~uintptr_t(1));
    return e;
  }

  MOZ_ALWAYS_INLINE void Put(void* aObject, nsCycleCollectionParticipant* aCp,
                             nsCycleCollectingAutoRefCnt* aRefCnt)
  {
    nsPurpleBufferEntry* e = NewEntry();

    ++mCount;

    e->mObject = aObject;
    e->mRefCnt = aRefCnt;
    e->mParticipant = aCp;
  }

  void Remove(nsPurpleBufferEntry* aEntry)
  {
    MOZ_ASSERT(mCount != 0, "must have entries");

    if (aEntry->mRefCnt) {
      aEntry->mRefCnt->RemoveFromPurpleBuffer();
      aEntry->mRefCnt = nullptr;
    }
    aEntry->mNextInFreeList =
      (nsPurpleBufferEntry*)(uintptr_t(mFreeList) | uintptr_t(1));
    mFreeList = aEntry;

    --mCount;
  }

  uint32_t Count() const
  {
    return mCount;
  }

  size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const
  {
    size_t n = 0;

    // Don't measure mFirstBlock because it's within |this|.
    const PurpleBlock* block = mFirstBlock.mNext;
    while (block) {
      n += aMallocSizeOf(block);
      block = block->mNext;
    }

    // mFreeList is deliberately not measured because it points into
    // the purple buffer, which is within mFirstBlock and thus within |this|.
    //
    // We also don't measure the things pointed to by mEntries[] because
    // those pointers are non-owning.

    return n;
  }
};

static bool
AddPurpleRoot(CCGraphBuilder& aBuilder, void* aRoot,
              nsCycleCollectionParticipant* aParti);

struct SelectPointersVisitor
{
  explicit SelectPointersVisitor(CCGraphBuilder& aBuilder)
    : mBuilder(aBuilder)
  {
  }

  void
  Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry)
  {
    MOZ_ASSERT(aEntry->mObject, "Null object in purple buffer");
    MOZ_ASSERT(aEntry->mRefCnt->get() != 0,
               "SelectPointersVisitor: snow-white object in the purple buffer");
    if (!aEntry->mRefCnt->IsPurple() ||
        AddPurpleRoot(mBuilder, aEntry->mObject, aEntry->mParticipant)) {
      aBuffer.Remove(aEntry);
    }
  }

private:
  CCGraphBuilder& mBuilder;
};

void
nsPurpleBuffer::SelectPointers(CCGraphBuilder& aBuilder)
{
  SelectPointersVisitor visitor(aBuilder);
  VisitEntries(visitor);

  NS_ASSERTION(mCount == 0, "AddPurpleRoot failed");
  if (mCount == 0) {
    FreeBlocks();
    InitBlocks();
    mFirstBlock.InitNextPointers();
  }
}

enum ccPhase
{
  IdlePhase,
  GraphBuildingPhase,
  ScanAndCollectWhitePhase,
  CleanupPhase
};

enum ccType
{
  SliceCC,   /* If a CC is in progress, continue it. Otherwise, start a new one. */
  ManualCC,  /* Explicitly triggered. */
  ShutdownCC /* Shutdown CC, used for finding leaks. */
};

////////////////////////////////////////////////////////////////////////
// Top level structure for the cycle collector.
////////////////////////////////////////////////////////////////////////

using js::SliceBudget;

class JSPurpleBuffer;

class nsCycleCollector : public nsIMemoryReporter
{
public:
  NS_DECL_ISUPPORTS
  NS_DECL_NSIMEMORYREPORTER

private:
  bool mActivelyCollecting;
  bool mFreeingSnowWhite;
  // mScanInProgress should be false when we're collecting white objects.
  bool mScanInProgress;
  CycleCollectorResults mResults;
  TimeStamp mCollectionStart;

  CycleCollectedJSContext* mJSContext;

  ccPhase mIncrementalPhase;
  CCGraph mGraph;
  nsAutoPtr<CCGraphBuilder> mBuilder;
  RefPtr<nsCycleCollectorLogger> mLogger;

#ifdef DEBUG
  void* mThread;
#endif

  nsCycleCollectorParams mParams;

  uint32_t mWhiteNodeCount;

  CC_BeforeUnlinkCallback mBeforeUnlinkCB;
  CC_ForgetSkippableCallback mForgetSkippableCB;

  nsPurpleBuffer mPurpleBuf;

  uint32_t mUnmergedNeeded;
  uint32_t mMergedInARow;

  RefPtr<JSPurpleBuffer> mJSPurpleBuffer;

private:
  virtual ~nsCycleCollector();

public:
  nsCycleCollector();

  void RegisterJSContext(CycleCollectedJSContext* aJSContext);
  void ForgetJSContext();

  void SetBeforeUnlinkCallback(CC_BeforeUnlinkCallback aBeforeUnlinkCB)
  {
    CheckThreadSafety();
    mBeforeUnlinkCB = aBeforeUnlinkCB;
  }

  void SetForgetSkippableCallback(CC_ForgetSkippableCallback aForgetSkippableCB)
  {
    CheckThreadSafety();
    mForgetSkippableCB = aForgetSkippableCB;
  }

  void Suspect(void* aPtr, nsCycleCollectionParticipant* aCp,
               nsCycleCollectingAutoRefCnt* aRefCnt);
  uint32_t SuspectedCount();
  void ForgetSkippable(bool aRemoveChildlessNodes, bool aAsyncSnowWhiteFreeing);
  bool FreeSnowWhite(bool aUntilNoSWInPurpleBuffer);

  // This method assumes its argument is already canonicalized.
  void RemoveObjectFromGraph(void* aPtr);

  void PrepareForGarbageCollection();
  void FinishAnyCurrentCollection();

  bool Collect(ccType aCCType,
               SliceBudget& aBudget,
               nsICycleCollectorListener* aManualListener,
               bool aPreferShorterSlices = false);
  void Shutdown(bool aDoCollect);

  bool IsIdle() const { return mIncrementalPhase == IdlePhase; }

  void SizeOfIncludingThis(mozilla::MallocSizeOf aMallocSizeOf,
                           size_t* aObjectSize,
                           size_t* aGraphSize,
                           size_t* aPurpleBufferSize) const;

  JSPurpleBuffer* GetJSPurpleBuffer();

  CycleCollectedJSContext* Context() { return mJSContext; }

private:
  void CheckThreadSafety();
  void ShutdownCollect();

  void FixGrayBits(bool aForceGC, TimeLog& aTimeLog);
  bool IsIncrementalGCInProgress();
  void FinishAnyIncrementalGCInProgress();
  bool ShouldMergeZones(ccType aCCType);

  void BeginCollection(ccType aCCType, nsICycleCollectorListener* aManualListener);
  void MarkRoots(SliceBudget& aBudget);
  void ScanRoots(bool aFullySynchGraphBuild);
  void ScanIncrementalRoots();
  void ScanWhiteNodes(bool aFullySynchGraphBuild);
  void ScanBlackNodes();
  void ScanWeakMaps();

  // Returns whether anything was collected.
  bool CollectWhite();

  void CleanupAfterCollection();
};

NS_IMPL_ISUPPORTS(nsCycleCollector, nsIMemoryReporter)

/**
 * GraphWalker is templatized over a Visitor class that must provide
 * the following two methods:
 *
 * bool ShouldVisitNode(PtrInfo const *pi);
 * void VisitNode(PtrInfo *pi);
 *
 * It must also provide a Failed() method, which CheckedPush calls when
 * appending to the work queue fails.
 */
template<class Visitor>
class GraphWalker
{
private:
  Visitor mVisitor;

  void DoWalk(nsDeque& aQueue);

  void CheckedPush(nsDeque& aQueue, PtrInfo* aPi)
  {
    if (!aPi) {
      MOZ_CRASH();
    }
    if (!aQueue.Push(aPi, fallible)) {
      mVisitor.Failed();
    }
  }

public:
  void Walk(PtrInfo* aPi);
  void WalkFromRoots(CCGraph& aGraph);
  // Copy-constructing the visitor should be cheap, and less
  // indirection than using a reference.
  explicit GraphWalker(const Visitor aVisitor) : mVisitor(aVisitor)
  {
  }
};
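
// A sketch of a Visitor (hypothetical; modeled on the scanning visitors
// defined later in this file). VisitNode must change the node's state so
// that ShouldVisitNode eventually returns false, or the walk would revisit
// nodes in a cycle forever. Here we mark reachable grey nodes black and
// count them through a reference, since GraphWalker copies the visitor.
//
//   struct MarkBlackVisitor
//   {
//     explicit MarkBlackVisitor(uint32_t& aCount) : mCount(aCount) {}
//     bool ShouldVisitNode(PtrInfo const* aPi) { return aPi->mColor == grey; }
//     void VisitNode(PtrInfo* aPi) { aPi->mColor = black; ++mCount; }
//     void Failed() {}
//     uint32_t& mCount;
//   };
//
//   uint32_t marked = 0;
//   GraphWalker<MarkBlackVisitor>(MarkBlackVisitor(marked)).WalkFromRoots(mGraph);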


////////////////////////////////////////////////////////////////////////
// The static collector struct
////////////////////////////////////////////////////////////////////////

struct CollectorData
{
  RefPtr<nsCycleCollector> mCollector;
  CycleCollectedJSContext* mContext;
};

static MOZ_THREAD_LOCAL(CollectorData*) sCollectorData;

////////////////////////////////////////////////////////////////////////
// Utility functions
////////////////////////////////////////////////////////////////////////

static inline void
ToParticipant(nsISupports* aPtr, nsXPCOMCycleCollectionParticipant** aCp)
{
  // We use QI to move from an nsISupports to an
  // nsXPCOMCycleCollectionParticipant, which is a per-class singleton helper
  // object that implements traversal and unlinking logic for the nsISupports
  // in question.
  *aCp = nullptr;
  CallQueryInterface(aPtr, aCp);
}

template<class Visitor>
MOZ_NEVER_INLINE void
GraphWalker<Visitor>::Walk(PtrInfo* aPi)
{
  nsDeque queue;
  CheckedPush(queue, aPi);
  DoWalk(queue);
}

template<class Visitor>
MOZ_NEVER_INLINE void
GraphWalker<Visitor>::WalkFromRoots(CCGraph& aGraph)
{
  nsDeque queue;
  NodePool::Enumerator etor(aGraph.mNodes);
  for (uint32_t i = 0; i < aGraph.mRootCount; ++i) {
    CheckedPush(queue, etor.GetNext());
  }
  DoWalk(queue);
}

template<class Visitor>
MOZ_NEVER_INLINE void
GraphWalker<Visitor>::DoWalk(nsDeque& aQueue)
{
  // Use aQueue to match the breadth-first traversal used when we
  // built the graph, for hopefully-better locality.
  while (aQueue.GetSize() > 0) {
    PtrInfo* pi = static_cast<PtrInfo*>(aQueue.PopFront());

    if (pi->WasTraversed() && mVisitor.ShouldVisitNode(pi)) {
      mVisitor.VisitNode(pi);
      for (EdgePool::Iterator child = pi->FirstChild(),
           child_end = pi->LastChild();
           child != child_end; ++child) {
        CheckedPush(aQueue, *child);
      }
    }
  }
}

struct CCGraphDescriber : public LinkedListElement<CCGraphDescriber>
{
  CCGraphDescriber()
    : mAddress("0x"), mCnt(0), mType(eUnknown)
  {
  }

  enum Type
  {
    eRefCountedObject,
    eGCedObject,
    eGCMarkedObject,
    eEdge,
    eRoot,
    eGarbage,
    eUnknown
  };

  nsCString mAddress;
  nsCString mName;
  nsCString mCompartmentOrToAddress;
  uint32_t mCnt;
  Type mType;
};

class LogStringMessageAsync : public CancelableRunnable
{
public:
  explicit LogStringMessageAsync(const nsAString& aMsg) : mMsg(aMsg)
  {}

  NS_IMETHOD Run() override
  {
    nsCOMPtr<nsIConsoleService> cs =
      do_GetService(NS_CONSOLESERVICE_CONTRACTID);
    if (cs) {
      cs->LogStringMessage(mMsg.get());
    }
    return NS_OK;
  }

private:
  nsString mMsg;
};

class nsCycleCollectorLogSinkToFile final : public nsICycleCollectorLogSink
{
public:
  NS_DECL_ISUPPORTS

  nsCycleCollectorLogSinkToFile() :
    mProcessIdentifier(base::GetCurrentProcId()),
    mGCLog("gc-edges"), mCCLog("cc-edges")
  {
  }

  NS_IMETHOD GetFilenameIdentifier(nsAString& aIdentifier) override
  {
    aIdentifier = mFilenameIdentifier;
    return NS_OK;
  }

  NS_IMETHOD SetFilenameIdentifier(const nsAString& aIdentifier) override
  {
    mFilenameIdentifier = aIdentifier;
    return NS_OK;
  }

  NS_IMETHOD GetProcessIdentifier(int32_t* aIdentifier) override
  {
    *aIdentifier = mProcessIdentifier;
    return NS_OK;
  }

  NS_IMETHOD SetProcessIdentifier(int32_t aIdentifier) override
  {
    mProcessIdentifier = aIdentifier;
    return NS_OK;
  }

  NS_IMETHOD GetGcLog(nsIFile** aPath) override
  {
    NS_IF_ADDREF(*aPath = mGCLog.mFile);
    return NS_OK;
  }

  NS_IMETHOD GetCcLog(nsIFile** aPath) override
  {
    NS_IF_ADDREF(*aPath = mCCLog.mFile);
    return NS_OK;
  }

  NS_IMETHOD Open(FILE** aGCLog, FILE** aCCLog) override
  {
    nsresult rv;

    if (mGCLog.mStream || mCCLog.mStream) {
      return NS_ERROR_UNEXPECTED;
    }

    rv = OpenLog(&mGCLog);
    NS_ENSURE_SUCCESS(rv, rv);
    *aGCLog = mGCLog.mStream;

    rv = OpenLog(&mCCLog);
    NS_ENSURE_SUCCESS(rv, rv);
    *aCCLog = mCCLog.mStream;

    return NS_OK;
  }

  NS_IMETHOD CloseGCLog() override
  {
    if (!mGCLog.mStream) {
      return NS_ERROR_UNEXPECTED;
    }
    CloseLog(&mGCLog, NS_LITERAL_STRING("Garbage"));
    return NS_OK;
  }

  NS_IMETHOD CloseCCLog() override
  {
    if (!mCCLog.mStream) {
      return NS_ERROR_UNEXPECTED;
    }
    CloseLog(&mCCLog, NS_LITERAL_STRING("Cycle"));
    return NS_OK;
  }

private:
  ~nsCycleCollectorLogSinkToFile()
  {
    if (mGCLog.mStream) {
      MozillaUnRegisterDebugFILE(mGCLog.mStream);
      fclose(mGCLog.mStream);
    }
    if (mCCLog.mStream) {
      MozillaUnRegisterDebugFILE(mCCLog.mStream);
      fclose(mCCLog.mStream);
    }
  }

  struct FileInfo
  {
    const char* const mPrefix;
    nsCOMPtr<nsIFile> mFile;
    FILE* mStream;

    explicit FileInfo(const char* aPrefix) : mPrefix(aPrefix), mStream(nullptr) { }
  };

  /**
   * Create a new file named something like aPrefix.$PID.$IDENTIFIER.log in
   * $MOZ_CC_LOG_DIRECTORY or in the system's temp directory. No existing
   * file will be overwritten; if aPrefix.$PID.$IDENTIFIER.log exists, we'll
   * try a file named something like aPrefix.$PID.$IDENTIFIER-1.log, and so
   * on.
   */
  already_AddRefed<nsIFile> CreateTempFile(const char* aPrefix)
  {
    nsPrintfCString filename("%s.%d%s%s.log",
                             aPrefix,
                             mProcessIdentifier,
                             mFilenameIdentifier.IsEmpty() ? "" : ".",
                             NS_ConvertUTF16toUTF8(mFilenameIdentifier).get());

    // Get the log directory either from $MOZ_CC_LOG_DIRECTORY or from
    // the fallback directories in OpenTempFile. We don't use an nsCOMPtr
    // here because OpenTempFile uses an in/out param and getter_AddRefs
    // wouldn't work.
    nsIFile* logFile = nullptr;
    if (char* env = PR_GetEnv("MOZ_CC_LOG_DIRECTORY")) {
      NS_NewNativeLocalFile(nsCString(env), /* followLinks = */ true,
                            &logFile);
    }

    // On Android or B2G, this function will open a file named
    // aFilename under a memory-reporting-specific folder
    // (/data/local/tmp/memory-reports). Otherwise, it will open a
    // file named aFilename under "NS_OS_TEMP_DIR".
    nsresult rv = nsDumpUtils::OpenTempFile(filename, &logFile,
                                            NS_LITERAL_CSTRING("memory-reports"));
    if (NS_FAILED(rv)) {
      NS_IF_RELEASE(logFile);
      return nullptr;
    }

    return dont_AddRef(logFile);
  }

  nsresult OpenLog(FileInfo* aLog)
  {
    // Initially create the log in a file starting with "incomplete-".
    // We'll move the file and strip off the "incomplete-" once the dump
    // completes. (We do this because we don't want scripts which poll
    // the filesystem looking for GC/CC dumps to grab a file before we're
    // finished writing to it.)
    nsAutoCString incomplete;
    incomplete += "incomplete-";
    incomplete += aLog->mPrefix;
    MOZ_ASSERT(!aLog->mFile);
    aLog->mFile = CreateTempFile(incomplete.get());
    if (NS_WARN_IF(!aLog->mFile)) {
      return NS_ERROR_UNEXPECTED;
    }

    MOZ_ASSERT(!aLog->mStream);
    nsresult rv = aLog->mFile->OpenANSIFileDesc("w", &aLog->mStream);
    if (NS_WARN_IF(NS_FAILED(rv))) {
      return NS_ERROR_UNEXPECTED;
    }
    MozillaRegisterDebugFILE(aLog->mStream);
    return NS_OK;
  }

  nsresult CloseLog(FileInfo* aLog, const nsAString& aCollectorKind)
  {
    MOZ_ASSERT(aLog->mStream);
    MOZ_ASSERT(aLog->mFile);

    MozillaUnRegisterDebugFILE(aLog->mStream);
    fclose(aLog->mStream);
    aLog->mStream = nullptr;

    // Strip off "incomplete-".
    nsCOMPtr<nsIFile> logFileFinalDestination =
      CreateTempFile(aLog->mPrefix);
    if (NS_WARN_IF(!logFileFinalDestination)) {
      return NS_ERROR_UNEXPECTED;
    }

    nsAutoString logFileFinalDestinationName;
    logFileFinalDestination->GetLeafName(logFileFinalDestinationName);
    if (NS_WARN_IF(logFileFinalDestinationName.IsEmpty())) {
      return NS_ERROR_UNEXPECTED;
    }

    aLog->mFile->MoveTo(/* directory */ nullptr, logFileFinalDestinationName);

    // Save the file path.
    aLog->mFile = logFileFinalDestination;

    // Log to the error console.
    nsAutoString logPath;
    logFileFinalDestination->GetPath(logPath);
    nsAutoString msg = aCollectorKind +
      NS_LITERAL_STRING(" Collector log dumped to ") + logPath;

    // We don't want any JS to run between ScanRoots and CollectWhite calls,
    // and since ScanRoots calls this method, better to log the message
    // asynchronously.
    RefPtr<LogStringMessageAsync> log = new LogStringMessageAsync(msg);
    NS_DispatchToCurrentThread(log);
    return NS_OK;
  }

  int32_t mProcessIdentifier;
  nsString mFilenameIdentifier;
  FileInfo mGCLog;
  FileInfo mCCLog;
};

NS_IMPL_ISUPPORTS(nsCycleCollectorLogSinkToFile, nsICycleCollectorLogSink)


class nsCycleCollectorLogger final : public nsICycleCollectorListener
{
  ~nsCycleCollectorLogger()
  {
    ClearDescribers();
  }

public:
  nsCycleCollectorLogger()
    : mLogSink(nsCycleCollector_createLogSink())
    , mWantAllTraces(false)
    , mDisableLog(false)
    , mWantAfterProcessing(false)
    , mCCLog(nullptr)
  {
  }

  NS_DECL_ISUPPORTS

  void SetAllTraces()
  {
    mWantAllTraces = true;
  }

  bool IsAllTraces()
  {
    return mWantAllTraces;
  }

  NS_IMETHOD AllTraces(nsICycleCollectorListener** aListener) override
  {
    SetAllTraces();
    NS_ADDREF(*aListener = this);
    return NS_OK;
  }

  NS_IMETHOD GetWantAllTraces(bool* aAllTraces) override
  {
    *aAllTraces = mWantAllTraces;
    return NS_OK;
  }

  NS_IMETHOD GetDisableLog(bool* aDisableLog) override
  {
    *aDisableLog = mDisableLog;
    return NS_OK;
  }

  NS_IMETHOD SetDisableLog(bool aDisableLog) override
  {
    mDisableLog = aDisableLog;
    return NS_OK;
  }

  NS_IMETHOD GetWantAfterProcessing(bool* aWantAfterProcessing) override
  {
    *aWantAfterProcessing = mWantAfterProcessing;
    return NS_OK;
  }

  NS_IMETHOD SetWantAfterProcessing(bool aWantAfterProcessing) override
  {
    mWantAfterProcessing = aWantAfterProcessing;
    return NS_OK;
  }

  NS_IMETHOD GetLogSink(nsICycleCollectorLogSink** aLogSink) override
  {
    NS_ADDREF(*aLogSink = mLogSink);
    return NS_OK;
  }

  NS_IMETHOD SetLogSink(nsICycleCollectorLogSink* aLogSink) override
  {
    if (!aLogSink) {
      return NS_ERROR_INVALID_ARG;
    }
    mLogSink = aLogSink;
    return NS_OK;
  }

  nsresult Begin()
  {
    nsresult rv;

    mCurrentAddress.AssignLiteral("0x");
    ClearDescribers();
    if (mDisableLog) {
      return NS_OK;
    }

    FILE* gcLog;
    rv = mLogSink->Open(&gcLog, &mCCLog);
    NS_ENSURE_SUCCESS(rv, rv);
    // Dump the JS heap.
    CollectorData* data = sCollectorData.get();
    if (data && data->mContext) {
      data->mContext->DumpJSHeap(gcLog);
    }
    rv = mLogSink->CloseGCLog();
    NS_ENSURE_SUCCESS(rv, rv);

    fprintf(mCCLog, "# WantAllTraces=%s\n", mWantAllTraces ? "true" : "false");
    return NS_OK;
  }
  void NoteRefCountedObject(uint64_t aAddress, uint32_t aRefCount,
                            const char* aObjectDescription)
  {
    if (!mDisableLog) {
      fprintf(mCCLog, "%p [rc=%u] %s\n", (void*)aAddress, aRefCount,
              aObjectDescription);
    }
    if (mWantAfterProcessing) {
      CCGraphDescriber* d = new CCGraphDescriber();
      mDescribers.insertBack(d);
      mCurrentAddress.AssignLiteral("0x");
      mCurrentAddress.AppendInt(aAddress, 16);
      d->mType = CCGraphDescriber::eRefCountedObject;
      d->mAddress = mCurrentAddress;
      d->mCnt = aRefCount;
      d->mName.Append(aObjectDescription);
    }
  }
  void NoteGCedObject(uint64_t aAddress, bool aMarked,
                      const char* aObjectDescription,
                      uint64_t aCompartmentAddress)
  {
    if (!mDisableLog) {
      fprintf(mCCLog, "%p [gc%s] %s\n", (void*)aAddress,
              aMarked ? ".marked" : "", aObjectDescription);
    }
    if (mWantAfterProcessing) {
      CCGraphDescriber* d = new CCGraphDescriber();
      mDescribers.insertBack(d);
      mCurrentAddress.AssignLiteral("0x");
      mCurrentAddress.AppendInt(aAddress, 16);
      d->mType = aMarked ? CCGraphDescriber::eGCMarkedObject :
                           CCGraphDescriber::eGCedObject;
      d->mAddress = mCurrentAddress;
      d->mName.Append(aObjectDescription);
      if (aCompartmentAddress) {
        d->mCompartmentOrToAddress.AssignLiteral("0x");
        d->mCompartmentOrToAddress.AppendInt(aCompartmentAddress, 16);
      } else {
        d->mCompartmentOrToAddress.SetIsVoid(true);
      }
    }
  }
  void NoteEdge(uint64_t aToAddress, const char* aEdgeName)
  {
    if (!mDisableLog) {
      fprintf(mCCLog, "> %p %s\n", (void*)aToAddress, aEdgeName);
    }
    if (mWantAfterProcessing) {
      CCGraphDescriber* d = new CCGraphDescriber();
      mDescribers.insertBack(d);
      d->mType = CCGraphDescriber::eEdge;
      d->mAddress = mCurrentAddress;
      d->mCompartmentOrToAddress.AssignLiteral("0x");
      d->mCompartmentOrToAddress.AppendInt(aToAddress, 16);
      d->mName.Append(aEdgeName);
    }
  }
  void NoteWeakMapEntry(uint64_t aMap, uint64_t aKey,
                        uint64_t aKeyDelegate, uint64_t aValue)
  {
    if (!mDisableLog) {
      fprintf(mCCLog, "WeakMapEntry map=%p key=%p keyDelegate=%p value=%p\n",
              (void*)aMap, (void*)aKey, (void*)aKeyDelegate, (void*)aValue);
    }
    // We don't support after-processing for weak map entries.
  }
  void NoteIncrementalRoot(uint64_t aAddress)
  {
    if (!mDisableLog) {
      fprintf(mCCLog, "IncrementalRoot %p\n", (void*)aAddress);
    }
    // We don't support after-processing for incremental roots.
  }
  void BeginResults()
  {
    if (!mDisableLog) {
      fputs("==========\n", mCCLog);
    }
  }
  void DescribeRoot(uint64_t aAddress, uint32_t aKnownEdges)
  {
    if (!mDisableLog) {
      fprintf(mCCLog, "%p [known=%u]\n", (void*)aAddress, aKnownEdges);
    }
    if (mWantAfterProcessing) {
      CCGraphDescriber* d = new CCGraphDescriber();
      mDescribers.insertBack(d);
      d->mType = CCGraphDescriber::eRoot;
      d->mAddress.AppendInt(aAddress, 16);
      d->mCnt = aKnownEdges;
    }
  }
  void DescribeGarbage(uint64_t aAddress)
  {
    if (!mDisableLog) {
      fprintf(mCCLog, "%p [garbage]\n", (void*)aAddress);
    }
    if (mWantAfterProcessing) {
      CCGraphDescriber* d = new CCGraphDescriber();
      mDescribers.insertBack(d);
      d->mType = CCGraphDescriber::eGarbage;
      d->mAddress.AppendInt(aAddress, 16);
    }
  }
  void End()
  {
    if (!mDisableLog) {
      mCCLog = nullptr;
      Unused << NS_WARN_IF(NS_FAILED(mLogSink->CloseCCLog()));
    }
  }
  NS_IMETHOD ProcessNext(nsICycleCollectorHandler* aHandler,
                         bool* aCanContinue) override
  {
    if (NS_WARN_IF(!aHandler) || NS_WARN_IF(!mWantAfterProcessing)) {
      return NS_ERROR_UNEXPECTED;
    }
    CCGraphDescriber* d = mDescribers.popFirst();
    if (d) {
      switch (d->mType) {
        case CCGraphDescriber::eRefCountedObject:
          aHandler->NoteRefCountedObject(d->mAddress,
                                         d->mCnt,
                                         d->mName);
          break;
        case CCGraphDescriber::eGCedObject:
        case CCGraphDescriber::eGCMarkedObject:
          aHandler->NoteGCedObject(d->mAddress,
                                   d->mType ==
1975 CCGraphDescriber::eGCMarkedObject,
1976 d->mName,
1977 d->mCompartmentOrToAddress);
1978 break;
1979 case CCGraphDescriber::eEdge:
1980 aHandler->NoteEdge(d->mAddress,
1981 d->mCompartmentOrToAddress,
1982 d->mName);
1983 break;
1984 case CCGraphDescriber::eRoot:
1985 aHandler->DescribeRoot(d->mAddress,
1986 d->mCnt);
1987 break;
1988 case CCGraphDescriber::eGarbage:
1989 aHandler->DescribeGarbage(d->mAddress);
1990 break;
1991 case CCGraphDescriber::eUnknown:
1992 NS_NOTREACHED("CCGraphDescriber::eUnknown");
1993 break;
1994 }
1995 delete d;
1996 }
1997 if (!(*aCanContinue = !mDescribers.isEmpty())) {
1998 mCurrentAddress.AssignLiteral("0x");
1999 }
2000 return NS_OK;
2001 }
AsLogger(nsCycleCollectorLogger ** aRetVal)2002 NS_IMETHOD AsLogger(nsCycleCollectorLogger** aRetVal) override
2003 {
2004 RefPtr<nsCycleCollectorLogger> rval = this;
2005 rval.forget(aRetVal);
2006 return NS_OK;
2007 }
2008 private:
ClearDescribers()2009 void ClearDescribers()
2010 {
2011 CCGraphDescriber* d;
2012 while ((d = mDescribers.popFirst())) {
2013 delete d;
2014 }
2015 }
2016
2017 nsCOMPtr<nsICycleCollectorLogSink> mLogSink;
2018 bool mWantAllTraces;
2019 bool mDisableLog;
2020 bool mWantAfterProcessing;
2021 nsCString mCurrentAddress;
2022 mozilla::LinkedList<CCGraphDescriber> mDescribers;
2023 FILE* mCCLog;
2024 };
2025
NS_IMPL_ISUPPORTS(nsCycleCollectorLogger,nsICycleCollectorListener)2026 NS_IMPL_ISUPPORTS(nsCycleCollectorLogger, nsICycleCollectorListener)
2027
2028 nsresult
2029 nsCycleCollectorLoggerConstructor(nsISupports* aOuter,
2030 const nsIID& aIID,
2031 void** aInstancePtr)
2032 {
2033 if (NS_WARN_IF(aOuter)) {
2034 return NS_ERROR_NO_AGGREGATION;
2035 }
2036
2037 nsISupports* logger = new nsCycleCollectorLogger();
2038
2039 return logger->QueryInterface(aIID, aInstancePtr);
2040 }
2041
2042 static bool
GCThingIsGrayCCThing(JS::GCCellPtr thing)2043 GCThingIsGrayCCThing(JS::GCCellPtr thing)
2044 {
2045 return AddToCCKind(thing.kind()) &&
2046 JS::GCThingIsMarkedGray(thing);
2047 }
2048
2049 static bool
ValueIsGrayCCThing(const JS::Value & value)2050 ValueIsGrayCCThing(const JS::Value& value)
2051 {
2052 return AddToCCKind(value.traceKind()) &&
2053 JS::GCThingIsMarkedGray(value.toGCCellPtr());
2054 }
2055
2056 ////////////////////////////////////////////////////////////////////////
2057 // Bacon & Rajan's |MarkRoots| routine.
2058 ////////////////////////////////////////////////////////////////////////
2059
2060 class CCGraphBuilder final : public nsCycleCollectionTraversalCallback,
2061 public nsCycleCollectionNoteRootCallback
2062 {
2063 private:
2064 CCGraph& mGraph;
2065 CycleCollectorResults& mResults;
2066 NodePool::Builder mNodeBuilder;
2067 EdgePool::Builder mEdgeBuilder;
2068 MOZ_INIT_OUTSIDE_CTOR PtrInfo* mCurrPi;
2069 nsCycleCollectionParticipant* mJSParticipant;
2070 nsCycleCollectionParticipant* mJSZoneParticipant;
2071 nsCString mNextEdgeName;
2072 RefPtr<nsCycleCollectorLogger> mLogger;
2073 bool mMergeZones;
2074 nsAutoPtr<NodePool::Enumerator> mCurrNode;
2075
2076 public:
2077 CCGraphBuilder(CCGraph& aGraph,
2078 CycleCollectorResults& aResults,
2079 CycleCollectedJSContext* aJSContext,
2080 nsCycleCollectorLogger* aLogger,
2081 bool aMergeZones);
2082 virtual ~CCGraphBuilder();
2083
WantAllTraces() const2084 bool WantAllTraces() const
2085 {
2086 return nsCycleCollectionNoteRootCallback::WantAllTraces();
2087 }
2088
2089 bool AddPurpleRoot(void* aRoot, nsCycleCollectionParticipant* aParti);
2090
2091 // This is called when all roots have been added to the graph, to prepare for BuildGraph().
2092 void DoneAddingRoots();
2093
2094 // Do some work traversing nodes in the graph. Returns true if this graph building is finished.
2095 bool BuildGraph(SliceBudget& aBudget);
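
  // For orientation, a hedged sketch (just the expected call pattern, not
  // real driver code) of how a caller uses the two methods above; the
  // actual driver is nsCycleCollector::MarkRoots() later in this file:
  //
  //   builder.DoneAddingRoots();            // freeze the root set
  //   while (!builder.BuildGraph(budget)) { // false => budget exhausted
  //     /* return to the mutator; resume later with a fresh budget */
  //   }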

private:
  PtrInfo* AddNode(void* aPtr, nsCycleCollectionParticipant* aParticipant);
  PtrInfo* AddWeakMapNode(JS::GCCellPtr aThing);
  PtrInfo* AddWeakMapNode(JSObject* aObject);

  void SetFirstChild()
  {
    mCurrPi->SetFirstChild(mEdgeBuilder.Mark());
  }

  void SetLastChild()
  {
    mCurrPi->SetLastChild(mEdgeBuilder.Mark());
  }

public:
  // nsCycleCollectionNoteRootCallback methods.
  NS_IMETHOD_(void) NoteXPCOMRoot(nsISupports* aRoot);
  NS_IMETHOD_(void) NoteJSRoot(JSObject* aRoot);
  NS_IMETHOD_(void) NoteNativeRoot(void* aRoot,
                                   nsCycleCollectionParticipant* aParticipant);
  NS_IMETHOD_(void) NoteWeakMapping(JSObject* aMap, JS::GCCellPtr aKey,
                                    JSObject* aKdelegate, JS::GCCellPtr aVal);

  // nsCycleCollectionTraversalCallback methods.
  NS_IMETHOD_(void) DescribeRefCountedNode(nsrefcnt aRefCount,
                                           const char* aObjName);
  NS_IMETHOD_(void) DescribeGCedNode(bool aIsMarked, const char* aObjName,
                                     uint64_t aCompartmentAddress);

  NS_IMETHOD_(void) NoteXPCOMChild(nsISupports* aChild);
  NS_IMETHOD_(void) NoteJSChild(const JS::GCCellPtr& aThing);
  NS_IMETHOD_(void) NoteNativeChild(void* aChild,
                                    nsCycleCollectionParticipant* aParticipant);
  NS_IMETHOD_(void) NoteNextEdgeName(const char* aName);

private:
  void NoteJSChild(JS::GCCellPtr aChild);

  NS_IMETHOD_(void) NoteRoot(void* aRoot,
                             nsCycleCollectionParticipant* aParticipant)
  {
    MOZ_ASSERT(aRoot);
    MOZ_ASSERT(aParticipant);

    if (!aParticipant->CanSkipInCC(aRoot) || MOZ_UNLIKELY(WantAllTraces())) {
      AddNode(aRoot, aParticipant);
    }
  }

  NS_IMETHOD_(void) NoteChild(void* aChild, nsCycleCollectionParticipant* aCp,
                              nsCString& aEdgeName)
  {
    PtrInfo* childPi = AddNode(aChild, aCp);
    if (!childPi) {
      return;
    }
    mEdgeBuilder.Add(childPi);
    if (mLogger) {
      mLogger->NoteEdge((uint64_t)aChild, aEdgeName.get());
    }
    ++childPi->mInternalRefs;
  }

  JS::Zone* MergeZone(JS::GCCellPtr aGcthing)
  {
    if (!mMergeZones) {
      return nullptr;
    }
    JS::Zone* zone = JS::GetTenuredGCThingZone(aGcthing);
    if (js::IsSystemZone(zone)) {
      return nullptr;
    }
    return zone;
  }
};

CCGraphBuilder::CCGraphBuilder(CCGraph& aGraph,
                               CycleCollectorResults& aResults,
                               CycleCollectedJSContext* aJSContext,
                               nsCycleCollectorLogger* aLogger,
                               bool aMergeZones)
  : mGraph(aGraph)
  , mResults(aResults)
  , mNodeBuilder(aGraph.mNodes)
  , mEdgeBuilder(aGraph.mEdges)
  , mJSParticipant(nullptr)
  , mJSZoneParticipant(nullptr)
  , mLogger(aLogger)
  , mMergeZones(aMergeZones)
{
  if (aJSContext) {
    mJSParticipant = aJSContext->GCThingParticipant();
    mJSZoneParticipant = aJSContext->ZoneParticipant();
  }

  if (mLogger) {
    mFlags |= nsCycleCollectionTraversalCallback::WANT_DEBUG_INFO;
    if (mLogger->IsAllTraces()) {
      mFlags |= nsCycleCollectionTraversalCallback::WANT_ALL_TRACES;
      mWantAllTraces = true; // for nsCycleCollectionNoteRootCallback
    }
  }

  mMergeZones = mMergeZones && MOZ_LIKELY(!WantAllTraces());

  MOZ_ASSERT(nsCycleCollectionNoteRootCallback::WantAllTraces() ==
             nsCycleCollectionTraversalCallback::WantAllTraces());
}

CCGraphBuilder::~CCGraphBuilder()
{
}

PtrInfo*
CCGraphBuilder::AddNode(void* aPtr, nsCycleCollectionParticipant* aParticipant)
{
  PtrToNodeEntry* e = mGraph.AddNodeToMap(aPtr);
  if (!e) {
    return nullptr;
  }

  PtrInfo* result;
  if (!e->mNode) {
    // New entry.
    result = mNodeBuilder.Add(aPtr, aParticipant);
    if (!result) {
      return nullptr;
    }

    e->mNode = result;
    NS_ASSERTION(result, "mNodeBuilder.Add returned null");
  } else {
    result = e->mNode;
    MOZ_ASSERT(result->mParticipant == aParticipant,
               "nsCycleCollectionParticipant shouldn't change!");
  }
  return result;
}

bool
CCGraphBuilder::AddPurpleRoot(void* aRoot, nsCycleCollectionParticipant* aParti)
{
  CanonicalizeParticipant(&aRoot, &aParti);

  if (WantAllTraces() || !aParti->CanSkipInCC(aRoot)) {
    PtrInfo* pinfo = AddNode(aRoot, aParti);
    if (!pinfo) {
      return false;
    }
  }

  return true;
}

void
CCGraphBuilder::DoneAddingRoots()
{
  // We've finished adding roots, and everything in the graph is a root.
  mGraph.mRootCount = mGraph.MapCount();

  mCurrNode = new NodePool::Enumerator(mGraph.mNodes);
}
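
// A note on the budget arithmetic in BuildGraph() below: SliceBudget::step(n)
// only consults the clock after enough steps have accumulated, so stepping by
// kStep = CounterReset / kNumNodesBetweenTimeChecks makes the isOverBudget()
// deadline check fire roughly once per kNumNodesBetweenTimeChecks (1000)
// traversed nodes rather than on every node. This is a reading of the code
// here, not a documented guarantee of the SliceBudget API.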
MOZ_NEVER_INLINE bool
CCGraphBuilder::BuildGraph(SliceBudget& aBudget)
{
  const intptr_t kNumNodesBetweenTimeChecks = 1000;
  const intptr_t kStep = SliceBudget::CounterReset / kNumNodesBetweenTimeChecks;

  MOZ_ASSERT(mCurrNode);

  while (!aBudget.isOverBudget() && !mCurrNode->IsDone()) {
    PtrInfo* pi = mCurrNode->GetNext();
    if (!pi) {
      MOZ_CRASH();
    }

    mCurrPi = pi;

    // We need to call SetFirstChild() even on deleted nodes, to set their
    // firstChild(), which may be read by a prior non-deleted neighbor.
    SetFirstChild();

    if (pi->mParticipant) {
      nsresult rv = pi->mParticipant->Traverse(pi->mPointer, *this);
      MOZ_RELEASE_ASSERT(!NS_FAILED(rv),
                         "Cycle collector Traverse method failed");
    }

    if (mCurrNode->AtBlockEnd()) {
      SetLastChild();
    }

    aBudget.step(kStep);
  }

  if (!mCurrNode->IsDone()) {
    return false;
  }

  if (mGraph.mRootCount > 0) {
    SetLastChild();
  }

  mCurrNode = nullptr;

  return true;
}

NS_IMETHODIMP_(void)
CCGraphBuilder::NoteXPCOMRoot(nsISupports* aRoot)
{
  aRoot = CanonicalizeXPCOMParticipant(aRoot);
  NS_ASSERTION(aRoot,
               "Don't add objects that don't participate in collection!");

  nsXPCOMCycleCollectionParticipant* cp;
  ToParticipant(aRoot, &cp);

  NoteRoot(aRoot, cp);
}

NS_IMETHODIMP_(void)
CCGraphBuilder::NoteJSRoot(JSObject* aRoot)
{
  if (JS::Zone* zone = MergeZone(JS::GCCellPtr(aRoot))) {
    NoteRoot(zone, mJSZoneParticipant);
  } else {
    NoteRoot(aRoot, mJSParticipant);
  }
}

NS_IMETHODIMP_(void)
CCGraphBuilder::NoteNativeRoot(void* aRoot,
                               nsCycleCollectionParticipant* aParticipant)
{
  NoteRoot(aRoot, aParticipant);
}

NS_IMETHODIMP_(void)
CCGraphBuilder::DescribeRefCountedNode(nsrefcnt aRefCount, const char* aObjName)
{
  MOZ_RELEASE_ASSERT(aRefCount != 0,
                     "CCed refcounted object has zero refcount");
  MOZ_RELEASE_ASSERT(aRefCount != UINT32_MAX,
                     "CCed refcounted object has overflowing refcount");

  mResults.mVisitedRefCounted++;

  if (mLogger) {
    mLogger->NoteRefCountedObject((uint64_t)mCurrPi->mPointer, aRefCount,
                                  aObjName);
  }

  mCurrPi->mRefCount = aRefCount;
}

NS_IMETHODIMP_(void)
CCGraphBuilder::DescribeGCedNode(bool aIsMarked, const char* aObjName,
                                 uint64_t aCompartmentAddress)
{
  uint32_t refCount = aIsMarked ? UINT32_MAX : 0;
  mResults.mVisitedGCed++;

  if (mLogger) {
    mLogger->NoteGCedObject((uint64_t)mCurrPi->mPointer, aIsMarked,
                            aObjName, aCompartmentAddress);
  }

  mCurrPi->mRefCount = refCount;
}

NS_IMETHODIMP_(void)
CCGraphBuilder::NoteXPCOMChild(nsISupports* aChild)
{
  nsCString edgeName;
  if (WantDebugInfo()) {
    edgeName.Assign(mNextEdgeName);
    mNextEdgeName.Truncate();
  }
  if (!aChild || !(aChild = CanonicalizeXPCOMParticipant(aChild))) {
    return;
  }

  nsXPCOMCycleCollectionParticipant* cp;
  ToParticipant(aChild, &cp);
  if (cp && (!cp->CanSkipThis(aChild) || WantAllTraces())) {
    NoteChild(aChild, cp, edgeName);
  }
}

NS_IMETHODIMP_(void)
CCGraphBuilder::NoteNativeChild(void* aChild,
                                nsCycleCollectionParticipant* aParticipant)
{
  nsCString edgeName;
  if (WantDebugInfo()) {
    edgeName.Assign(mNextEdgeName);
    mNextEdgeName.Truncate();
  }
  if (!aChild) {
    return;
  }

  MOZ_ASSERT(aParticipant, "Need a nsCycleCollectionParticipant!");
  if (!aParticipant->CanSkipThis(aChild) || WantAllTraces()) {
    NoteChild(aChild, aParticipant, edgeName);
  }
}

NS_IMETHODIMP_(void)
CCGraphBuilder::NoteJSChild(const JS::GCCellPtr& aChild)
{
  if (!aChild) {
    return;
  }

  nsCString edgeName;
  if (MOZ_UNLIKELY(WantDebugInfo())) {
    edgeName.Assign(mNextEdgeName);
    mNextEdgeName.Truncate();
  }

  if (GCThingIsGrayCCThing(aChild) || MOZ_UNLIKELY(WantAllTraces())) {
    if (JS::Zone* zone = MergeZone(aChild)) {
      NoteChild(zone, mJSZoneParticipant, edgeName);
    } else {
      NoteChild(aChild.asCell(), mJSParticipant, edgeName);
    }
  }
}

NS_IMETHODIMP_(void)
CCGraphBuilder::NoteNextEdgeName(const char* aName)
{
  if (WantDebugInfo()) {
    mNextEdgeName = aName;
  }
}

PtrInfo*
CCGraphBuilder::AddWeakMapNode(JS::GCCellPtr aNode)
{
  MOZ_ASSERT(aNode, "Weak map node should be non-null.");

  if (!GCThingIsGrayCCThing(aNode) && !WantAllTraces()) {
    return nullptr;
  }

  if (JS::Zone* zone = MergeZone(aNode)) {
    return AddNode(zone, mJSZoneParticipant);
  }
  return AddNode(aNode.asCell(), mJSParticipant);
}

PtrInfo*
CCGraphBuilder::AddWeakMapNode(JSObject* aObject)
{
  return AddWeakMapNode(JS::GCCellPtr(aObject));
}

NS_IMETHODIMP_(void)
CCGraphBuilder::NoteWeakMapping(JSObject* aMap, JS::GCCellPtr aKey,
                                JSObject* aKdelegate, JS::GCCellPtr aVal)
{
  // Don't try to optimize away the entry here, as we've already attempted to
  // do that in TraceWeakMapping in nsXPConnect.
  WeakMapping* mapping = mGraph.mWeakMaps.AppendElement();
  mapping->mMap = aMap ? AddWeakMapNode(aMap) : nullptr;
  mapping->mKey = aKey ? AddWeakMapNode(aKey) : nullptr;
  mapping->mKeyDelegate = aKdelegate ? AddWeakMapNode(aKdelegate)
                                     : mapping->mKey;
  mapping->mVal = aVal ? AddWeakMapNode(aVal) : nullptr;

  if (mLogger) {
    mLogger->NoteWeakMapEntry((uint64_t)aMap, aKey ? aKey.unsafeAsInteger() : 0,
                              (uint64_t)aKdelegate,
                              aVal ? aVal.unsafeAsInteger() : 0);
  }
}

static bool
AddPurpleRoot(CCGraphBuilder& aBuilder, void* aRoot,
              nsCycleCollectionParticipant* aParti)
{
  return aBuilder.AddPurpleRoot(aRoot, aParti);
}

// MayHaveChild() will be false after a Traverse if the object does
// not have any children the CC will visit.
class ChildFinder : public nsCycleCollectionTraversalCallback
{
public:
  ChildFinder() : mMayHaveChild(false)
  {
  }

  // The logic of the Note*Child functions must mirror that of their
  // respective functions in CCGraphBuilder.
  NS_IMETHOD_(void) NoteXPCOMChild(nsISupports* aChild);
  NS_IMETHOD_(void) NoteNativeChild(void* aChild,
                                    nsCycleCollectionParticipant* aHelper);
  NS_IMETHOD_(void) NoteJSChild(const JS::GCCellPtr& aThing);

  NS_IMETHOD_(void) DescribeRefCountedNode(nsrefcnt aRefcount,
                                           const char* aObjname)
  {
  }
  NS_IMETHOD_(void) DescribeGCedNode(bool aIsMarked,
                                     const char* aObjname,
                                     uint64_t aCompartmentAddress)
  {
  }
  NS_IMETHOD_(void) NoteNextEdgeName(const char* aName)
  {
  }
  bool MayHaveChild()
  {
    return mMayHaveChild;
  }
private:
  bool mMayHaveChild;
};
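
// Usage is just below in MayHaveChild(): a participant's Traverse() is run
// against this stripped-down callback, and any reported child that cannot be
// skipped flips mMayHaveChild. An observation about the code as written (not
// a documented contract): Traverse() is not aborted after the first hit; the
// whole traversal runs and the flag is simply sticky.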

NS_IMETHODIMP_(void)
ChildFinder::NoteXPCOMChild(nsISupports* aChild)
{
  if (!aChild || !(aChild = CanonicalizeXPCOMParticipant(aChild))) {
    return;
  }
  nsXPCOMCycleCollectionParticipant* cp;
  ToParticipant(aChild, &cp);
  if (cp && !cp->CanSkip(aChild, true)) {
    mMayHaveChild = true;
  }
}

NS_IMETHODIMP_(void)
ChildFinder::NoteNativeChild(void* aChild,
                             nsCycleCollectionParticipant* aHelper)
{
  if (!aChild) {
    return;
  }
  MOZ_ASSERT(aHelper, "Native child must have a participant");
  if (!aHelper->CanSkip(aChild, true)) {
    mMayHaveChild = true;
  }
}

NS_IMETHODIMP_(void)
ChildFinder::NoteJSChild(const JS::GCCellPtr& aChild)
{
  if (aChild && JS::GCThingIsMarkedGray(aChild)) {
    mMayHaveChild = true;
  }
}

static bool
MayHaveChild(void* aObj, nsCycleCollectionParticipant* aCp)
{
  ChildFinder cf;
  aCp->Traverse(aObj, cf);
  return cf.MayHaveChild();
}

// JSPurpleBuffer keeps references to GCThings which might affect the
// next cycle collection. It is owned only by itself: during unlink its
// self-reference is broken and the object ends up deleting itself.
// If a GC happens before the CC, the references to GCThings and the
// self-reference are removed.
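//
// A hedged sketch of that ownership dance (the members are real, the flow
// is paraphrased):
//
//   RefPtr<JSPurpleBuffer> slot;   // slot held by the collector
//   new JSPurpleBuffer(slot);      // ctor stores itself: slot = this, so
//                                  // the buffer's only owner is the
//                                  // reference it handed out to the slot
//   slot->Destroy();               // (called e.g. from the cycle-collection
//                                  // unlink below) clears mReferenceToThis,
//                                  // dropping the last reference, and the
//                                  // buffer deletes itself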
class JSPurpleBuffer
{
  ~JSPurpleBuffer()
  {
    MOZ_ASSERT(mValues.IsEmpty());
    MOZ_ASSERT(mObjects.IsEmpty());
  }

public:
  explicit JSPurpleBuffer(RefPtr<JSPurpleBuffer>& aReferenceToThis)
    : mReferenceToThis(aReferenceToThis)
    , mValues(kSegmentSize)
    , mObjects(kSegmentSize)
  {
    mReferenceToThis = this;
    mozilla::HoldJSObjects(this);
  }

  void Destroy()
  {
    mReferenceToThis = nullptr;
    mValues.Clear();
    mObjects.Clear();
    mozilla::DropJSObjects(this);
  }

  NS_INLINE_DECL_CYCLE_COLLECTING_NATIVE_REFCOUNTING(JSPurpleBuffer)
  NS_DECL_CYCLE_COLLECTION_SCRIPT_HOLDER_NATIVE_CLASS(JSPurpleBuffer)

  RefPtr<JSPurpleBuffer>& mReferenceToThis;

  // These are raw pointers instead of Heap<T> because we only need Heap<T>
  // for pointers which may point into the nursery. The purple buffer never
  // contains pointers to the nursery, because nursery gcthings can never be
  // gray and only gray things can be inserted into the purple buffer.
  static const size_t kSegmentSize = 512;
  SegmentedVector<JS::Value, kSegmentSize, InfallibleAllocPolicy> mValues;
  SegmentedVector<JSObject*, kSegmentSize, InfallibleAllocPolicy> mObjects;
};

NS_IMPL_CYCLE_COLLECTION_CLASS(JSPurpleBuffer)

NS_IMPL_CYCLE_COLLECTION_UNLINK_BEGIN(JSPurpleBuffer)
  tmp->Destroy();
NS_IMPL_CYCLE_COLLECTION_UNLINK_END

NS_IMPL_CYCLE_COLLECTION_TRAVERSE_BEGIN(JSPurpleBuffer)
  CycleCollectionNoteChild(cb, tmp, "self");
  NS_IMPL_CYCLE_COLLECTION_TRAVERSE_SCRIPT_OBJECTS
NS_IMPL_CYCLE_COLLECTION_TRAVERSE_END

#define NS_TRACE_SEGMENTED_ARRAY(_field, _type)                         \
  {                                                                     \
    for (auto iter = tmp->_field.Iter(); !iter.Done(); iter.Next()) {   \
      js::gc::CallTraceCallbackOnNonHeap<_type, TraceCallbacks>(        \
        &iter.Get(), aCallbacks, #_field, aClosure);                    \
    }                                                                   \
  }
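
// Roughly, NS_TRACE_SEGMENTED_ARRAY(mValues, JS::Value) expands to a loop
// handing each element of tmp->mValues to the trace callbacks, i.e. (a
// sketch with the macro arguments substituted by hand):
//
//   for (auto iter = tmp->mValues.Iter(); !iter.Done(); iter.Next()) {
//     js::gc::CallTraceCallbackOnNonHeap<JS::Value, TraceCallbacks>(
//       &iter.Get(), aCallbacks, "mValues", aClosure);
//   }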

NS_IMPL_CYCLE_COLLECTION_TRACE_BEGIN(JSPurpleBuffer)
  NS_TRACE_SEGMENTED_ARRAY(mValues, JS::Value)
  NS_TRACE_SEGMENTED_ARRAY(mObjects, JSObject*)
NS_IMPL_CYCLE_COLLECTION_TRACE_END

NS_IMPL_CYCLE_COLLECTION_ROOT_NATIVE(JSPurpleBuffer, AddRef)
NS_IMPL_CYCLE_COLLECTION_UNROOT_NATIVE(JSPurpleBuffer, Release)

class SnowWhiteKiller : public TraceCallbacks
{
  struct SnowWhiteObject
  {
    void* mPointer;
    nsCycleCollectionParticipant* mParticipant;
    nsCycleCollectingAutoRefCnt* mRefCnt;
  };

  // Segments are 4 KiB on 32-bit and 8 KiB on 64-bit.
  static const size_t kSegmentSize = sizeof(void*) * 1024;
  typedef SegmentedVector<SnowWhiteObject, kSegmentSize, InfallibleAllocPolicy>
    ObjectsVector;

public:
  explicit SnowWhiteKiller(nsCycleCollector* aCollector)
    : mCollector(aCollector)
    , mObjects(kSegmentSize)
  {
    MOZ_ASSERT(mCollector, "Calling SnowWhiteKiller after nsCC went away");
  }

  ~SnowWhiteKiller()
  {
    for (auto iter = mObjects.Iter(); !iter.Done(); iter.Next()) {
      SnowWhiteObject& o = iter.Get();
      if (!o.mRefCnt->get() && !o.mRefCnt->IsInPurpleBuffer()) {
        mCollector->RemoveObjectFromGraph(o.mPointer);
        o.mRefCnt->stabilizeForDeletion();
        {
          JS::AutoEnterCycleCollection autocc(mCollector->Context()->Context());
          o.mParticipant->Trace(o.mPointer, *this, nullptr);
        }
        o.mParticipant->DeleteCycleCollectable(o.mPointer);
      }
    }
  }

  void
  Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry)
  {
    MOZ_ASSERT(aEntry->mObject, "Null object in purple buffer");
    if (!aEntry->mRefCnt->get()) {
      void* o = aEntry->mObject;
      nsCycleCollectionParticipant* cp = aEntry->mParticipant;
      CanonicalizeParticipant(&o, &cp);
      SnowWhiteObject swo = { o, cp, aEntry->mRefCnt };
      mObjects.InfallibleAppend(swo);
      aBuffer.Remove(aEntry);
    }
  }

  bool HasSnowWhiteObjects() const
  {
    return !mObjects.IsEmpty();
  }

  virtual void Trace(JS::Heap<JS::Value>* aValue, const char* aName,
                     void* aClosure) const override
  {
    const JS::Value& val = aValue->unbarrieredGet();
    if (val.isMarkable() && ValueIsGrayCCThing(val)) {
      MOZ_ASSERT(!js::gc::IsInsideNursery(val.toGCThing()));
      mCollector->GetJSPurpleBuffer()->mValues.InfallibleAppend(val);
    }
  }

  virtual void Trace(JS::Heap<jsid>* aId, const char* aName,
                     void* aClosure) const override
  {
  }

  void AppendJSObjectToPurpleBuffer(JSObject* obj) const
  {
    if (obj && JS::ObjectIsMarkedGray(obj)) {
      MOZ_ASSERT(JS::ObjectIsTenured(obj));
      mCollector->GetJSPurpleBuffer()->mObjects.InfallibleAppend(obj);
    }
  }

  virtual void Trace(JS::Heap<JSObject*>* aObject, const char* aName,
                     void* aClosure) const override
  {
    AppendJSObjectToPurpleBuffer(aObject->unbarrieredGet());
  }

  virtual void Trace(JSObject** aObject, const char* aName,
                     void* aClosure) const override
  {
    AppendJSObjectToPurpleBuffer(*aObject);
  }

  virtual void Trace(JS::TenuredHeap<JSObject*>* aObject, const char* aName,
                     void* aClosure) const override
  {
    AppendJSObjectToPurpleBuffer(aObject->unbarrieredGetPtr());
  }

  virtual void Trace(JS::Heap<JSString*>* aString, const char* aName,
                     void* aClosure) const override
  {
  }

  virtual void Trace(JS::Heap<JSScript*>* aScript, const char* aName,
                     void* aClosure) const override
  {
  }

  virtual void Trace(JS::Heap<JSFunction*>* aFunction, const char* aName,
                     void* aClosure) const override
  {
  }

private:
  RefPtr<nsCycleCollector> mCollector;
  ObjectsVector mObjects;
};

class RemoveSkippableVisitor : public SnowWhiteKiller
{
public:
  RemoveSkippableVisitor(nsCycleCollector* aCollector,
                         bool aRemoveChildlessNodes,
                         bool aAsyncSnowWhiteFreeing,
                         CC_ForgetSkippableCallback aCb)
    : SnowWhiteKiller(aCollector)
    , mRemoveChildlessNodes(aRemoveChildlessNodes)
    , mAsyncSnowWhiteFreeing(aAsyncSnowWhiteFreeing)
    , mDispatchedDeferredDeletion(false)
    , mCallback(aCb)
  {
  }

  ~RemoveSkippableVisitor()
  {
    // Note, we must call the callback before SnowWhiteKiller calls
    // DeleteCycleCollectable!
    if (mCallback) {
      mCallback();
    }
    if (HasSnowWhiteObjects()) {
      // Effectively a continuation.
      nsCycleCollector_dispatchDeferredDeletion(true);
    }
  }

  void
  Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry)
  {
    MOZ_ASSERT(aEntry->mObject, "null mObject in purple buffer");
    if (!aEntry->mRefCnt->get()) {
      if (!mAsyncSnowWhiteFreeing) {
        SnowWhiteKiller::Visit(aBuffer, aEntry);
      } else if (!mDispatchedDeferredDeletion) {
        mDispatchedDeferredDeletion = true;
        nsCycleCollector_dispatchDeferredDeletion(false);
      }
      return;
    }
    void* o = aEntry->mObject;
    nsCycleCollectionParticipant* cp = aEntry->mParticipant;
    CanonicalizeParticipant(&o, &cp);
    if (aEntry->mRefCnt->IsPurple() && !cp->CanSkip(o, false) &&
        (!mRemoveChildlessNodes || MayHaveChild(o, cp))) {
      return;
    }
    aBuffer.Remove(aEntry);
  }

private:
  bool mRemoveChildlessNodes;
  bool mAsyncSnowWhiteFreeing;
  bool mDispatchedDeferredDeletion;
  CC_ForgetSkippableCallback mCallback;
};
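
// Decision table for RemoveSkippableVisitor::Visit() above, restated from
// the code:
//
//   refcnt == 0, sync freeing   -> collect now via SnowWhiteKiller::Visit()
//   refcnt == 0, async freeing  -> leave in the buffer; dispatch deferred
//                                  deletion once per visitor
//   refcnt > 0 and still suspicious (purple, not skippable, and either we
//   keep childless nodes or it may have children)
//                               -> keep in the buffer
//   anything else               -> drop from the buffer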

void
nsPurpleBuffer::RemoveSkippable(nsCycleCollector* aCollector,
                                bool aRemoveChildlessNodes,
                                bool aAsyncSnowWhiteFreeing,
                                CC_ForgetSkippableCallback aCb)
{
  RemoveSkippableVisitor visitor(aCollector, aRemoveChildlessNodes,
                                 aAsyncSnowWhiteFreeing, aCb);
  VisitEntries(visitor);
}

bool
nsCycleCollector::FreeSnowWhite(bool aUntilNoSWInPurpleBuffer)
{
  CheckThreadSafety();

  if (mFreeingSnowWhite) {
    return false;
  }

  AutoRestore<bool> ar(mFreeingSnowWhite);
  mFreeingSnowWhite = true;

  bool hadSnowWhiteObjects = false;
  do {
    SnowWhiteKiller visitor(this);
    mPurpleBuf.VisitEntries(visitor);
    hadSnowWhiteObjects = hadSnowWhiteObjects ||
                          visitor.HasSnowWhiteObjects();
    if (!visitor.HasSnowWhiteObjects()) {
      break;
    }
  } while (aUntilNoSWInPurpleBuffer);
  return hadSnowWhiteObjects;
}

void
nsCycleCollector::ForgetSkippable(bool aRemoveChildlessNodes,
                                  bool aAsyncSnowWhiteFreeing)
{
  CheckThreadSafety();

  mozilla::Maybe<mozilla::AutoGlobalTimelineMarker> marker;
  if (NS_IsMainThread()) {
    marker.emplace("nsCycleCollector::ForgetSkippable",
                   MarkerStackRequest::NO_STACK);
  }

  // If we remove things from the purple buffer during graph building, we may
  // lose track of an object that was mutated during graph building.
  MOZ_ASSERT(IsIdle());

  if (mJSContext) {
    mJSContext->PrepareForForgetSkippable();
  }
  MOZ_ASSERT(!mScanInProgress,
             "Don't forget skippable or free snow-white while scan is in progress.");
  mPurpleBuf.RemoveSkippable(this, aRemoveChildlessNodes,
                             aAsyncSnowWhiteFreeing, mForgetSkippableCB);
}

MOZ_NEVER_INLINE void
nsCycleCollector::MarkRoots(SliceBudget& aBudget)
{
  JS::AutoAssertNoGC nogc;
  TimeLog timeLog;
  AutoRestore<bool> ar(mScanInProgress);
  MOZ_RELEASE_ASSERT(!mScanInProgress);
  mScanInProgress = true;
  MOZ_ASSERT(mIncrementalPhase == GraphBuildingPhase);

  JS::AutoEnterCycleCollection autocc(Context()->Context());
  bool doneBuilding = mBuilder->BuildGraph(aBudget);

  if (!doneBuilding) {
    timeLog.Checkpoint("MarkRoots()");
    return;
  }

  mBuilder = nullptr;
  mIncrementalPhase = ScanAndCollectWhitePhase;
  timeLog.Checkpoint("MarkRoots()");
}


////////////////////////////////////////////////////////////////////////
// Bacon & Rajan's |ScanRoots| routine.
////////////////////////////////////////////////////////////////////////


struct ScanBlackVisitor
{
  ScanBlackVisitor(uint32_t& aWhiteNodeCount, bool& aFailed)
    : mWhiteNodeCount(aWhiteNodeCount), mFailed(aFailed)
  {
  }

  bool ShouldVisitNode(PtrInfo const* aPi)
  {
    return aPi->mColor != black;
  }

  MOZ_NEVER_INLINE void VisitNode(PtrInfo* aPi)
  {
    if (aPi->mColor == white) {
      --mWhiteNodeCount;
    }
    aPi->mColor = black;
  }

  void Failed()
  {
    mFailed = true;
  }

private:
  uint32_t& mWhiteNodeCount;
  bool& mFailed;
};

static void
FloodBlackNode(uint32_t& aWhiteNodeCount, bool& aFailed, PtrInfo* aPi)
{
  GraphWalker<ScanBlackVisitor>(ScanBlackVisitor(aWhiteNodeCount,
                                                 aFailed)).Walk(aPi);
  MOZ_ASSERT(aPi->mColor == black || !aPi->WasTraversed(),
             "FloodBlackNode should make aPi black");
}

// Iterate over the WeakMaps. If we mark anything while iterating
// over the WeakMaps, we must iterate over all of the WeakMaps again.
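//
// The marking rules applied by the two FloodBlackNode() calls below,
// informally: an entry's key is live if its map and the key's delegate are
// both live, and an entry's value is live if its map and key are both live.
// In color terms:
//
//   map == black && delegate == black && key != black  => flood key black
//   map == black && key == black      && val != black  => flood value black
//
// Because flooding one entry can blacken the key or value of another entry,
// the loop below iterates to a fixed point (anyChanged).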
void
nsCycleCollector::ScanWeakMaps()
{
  bool anyChanged;
  bool failed = false;
  do {
    anyChanged = false;
    for (uint32_t i = 0; i < mGraph.mWeakMaps.Length(); i++) {
      WeakMapping* wm = &mGraph.mWeakMaps[i];

      // If any of these are null, the original object was marked black.
      uint32_t mColor = wm->mMap ? wm->mMap->mColor : black;
      uint32_t kColor = wm->mKey ? wm->mKey->mColor : black;
      uint32_t kdColor = wm->mKeyDelegate ? wm->mKeyDelegate->mColor : black;
      uint32_t vColor = wm->mVal ? wm->mVal->mColor : black;

      MOZ_ASSERT(mColor != grey, "Uncolored weak map");
      MOZ_ASSERT(kColor != grey, "Uncolored weak map key");
      MOZ_ASSERT(kdColor != grey, "Uncolored weak map key delegate");
      MOZ_ASSERT(vColor != grey, "Uncolored weak map value");

      if (mColor == black && kColor != black && kdColor == black) {
        FloodBlackNode(mWhiteNodeCount, failed, wm->mKey);
        anyChanged = true;
      }

      if (mColor == black && kColor == black && vColor != black) {
        FloodBlackNode(mWhiteNodeCount, failed, wm->mVal);
        anyChanged = true;
      }
    }
  } while (anyChanged);

  if (failed) {
    MOZ_ASSERT(false, "Ran out of memory in ScanWeakMaps");
    CC_TELEMETRY(_OOM, true);
  }
}

// Flood black from any objects in the purple buffer that are in the CC graph.
class PurpleScanBlackVisitor
{
public:
  PurpleScanBlackVisitor(CCGraph& aGraph, nsCycleCollectorLogger* aLogger,
                         uint32_t& aCount, bool& aFailed)
    : mGraph(aGraph), mLogger(aLogger), mCount(aCount), mFailed(aFailed)
  {
  }

  void
  Visit(nsPurpleBuffer& aBuffer, nsPurpleBufferEntry* aEntry)
  {
    MOZ_ASSERT(aEntry->mObject,
               "Entries with null mObject shouldn't be in the purple buffer.");
    MOZ_ASSERT(aEntry->mRefCnt->get() != 0,
               "Snow-white objects shouldn't be in the purple buffer.");

    void* obj = aEntry->mObject;
    if (!aEntry->mParticipant) {
      obj = CanonicalizeXPCOMParticipant(static_cast<nsISupports*>(obj));
      MOZ_ASSERT(obj,
                 "Don't add objects that don't participate in collection!");
    }

    PtrInfo* pi = mGraph.FindNode(obj);
    if (!pi) {
      return;
    }
    MOZ_ASSERT(pi->mParticipant,
               "No dead objects should be in the purple buffer.");
    if (MOZ_UNLIKELY(mLogger)) {
      mLogger->NoteIncrementalRoot((uint64_t)pi->mPointer);
    }
    if (pi->mColor == black) {
      return;
    }
    FloodBlackNode(mCount, mFailed, pi);
  }

private:
  CCGraph& mGraph;
  RefPtr<nsCycleCollectorLogger> mLogger;
  uint32_t& mCount;
  bool& mFailed;
};

// Objects that have been stored somewhere since the start of incremental
// graph building must be treated as live for this cycle collection, because
// we may not have accurate information about who holds references to them.
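//
// Illustrative example (not from a real log): suppose the graph snapshots
// object A with refcount 2 and finds both references as edges inside the
// graph. If, between two slices, the mutator stores A into a global and
// drops one of the original references, A's refcount is still 2 and the
// graph still "explains" all of it, so A would wrongly be judged garbage
// even though the global now keeps it alive. Scanning the purple buffer and
// the other cases below catch objects touched in this way.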
void
nsCycleCollector::ScanIncrementalRoots()
{
  TimeLog timeLog;

  // Reference counted objects:
  // We cleared the purple buffer at the start of the current ICC, so if a
  // refcounted object is purple, it may have been AddRef'd during the current
  // ICC. (It may also have only been released.) If that is the case, we cannot
  // be sure that the set of things pointing to the object in the CC graph
  // is accurate. Therefore, for safety, we treat any purple objects as being
  // live during the current CC. We don't remove anything from the purple
  // buffer here, so these objects will be suspected and freed in the next CC
  // if they are garbage.
  bool failed = false;
  PurpleScanBlackVisitor purpleScanBlackVisitor(mGraph, mLogger,
                                                mWhiteNodeCount, failed);
  mPurpleBuf.VisitEntries(purpleScanBlackVisitor);
  timeLog.Checkpoint("ScanIncrementalRoots::fix purple");

  bool hasJSContext = !!mJSContext;
  nsCycleCollectionParticipant* jsParticipant =
    hasJSContext ? mJSContext->GCThingParticipant() : nullptr;
  nsCycleCollectionParticipant* zoneParticipant =
    hasJSContext ? mJSContext->ZoneParticipant() : nullptr;
  bool hasLogger = !!mLogger;

  NodePool::Enumerator etor(mGraph.mNodes);
  while (!etor.IsDone()) {
    PtrInfo* pi = etor.GetNext();

    // As an optimization, if an object has already been determined to be
    // live, don't consider it further. We can't do this if there is a
    // listener, because the listener wants to know the complete set of
    // incremental roots.
    if (pi->mColor == black && MOZ_LIKELY(!hasLogger)) {
      continue;
    }

    // Garbage collected objects:
    // If a GCed object was added to the graph with a refcount of zero, and is
    // now marked black by the GC, it was probably gray before and was exposed
    // to active JS, so it may have been stored somewhere, so it needs to be
    // treated as live.
    if (pi->IsGrayJS() && MOZ_LIKELY(hasJSContext)) {
      // If the object is still marked gray by the GC, nothing could have
      // gotten hold of it, so it isn't an incremental root.
      if (pi->mParticipant == jsParticipant) {
        JS::GCCellPtr ptr(pi->mPointer, JS::GCThingTraceKind(pi->mPointer));
        if (GCThingIsGrayCCThing(ptr)) {
          continue;
        }
      } else if (pi->mParticipant == zoneParticipant) {
        JS::Zone* zone = static_cast<JS::Zone*>(pi->mPointer);
        if (js::ZoneGlobalsAreAllGray(zone)) {
          continue;
        }
      } else {
        MOZ_ASSERT(false, "Non-JS thing with 0 refcount? Treating as live.");
      }
    } else if (!pi->mParticipant && pi->WasTraversed()) {
      // Dead traversed refcounted objects:
      // If the object was traversed, it must have been alive at the start of
      // the CC, and thus had a positive refcount. It is dead now, so its
      // refcount must have decreased at some point during the CC. Therefore,
      // it would be in the purple buffer if it wasn't dead, so treat it as an
      // incremental root.
      //
      // This should not cause leaks because as the object died it should have
      // released anything it held onto, which will add them to the purple
      // buffer, which will cause them to be considered in the next CC.
    } else {
      continue;
    }

    // At this point, pi must be an incremental root.

    // If there's a listener, tell it about this root. We don't bother with
    // the optimization of skipping the Walk() if pi is black: it will just
    // return without doing anything and there's no need to make this case
    // faster.
    if (MOZ_UNLIKELY(hasLogger) && pi->mPointer) {
      // Dead objects aren't logged. See bug 1031370.
      mLogger->NoteIncrementalRoot((uint64_t)pi->mPointer);
    }

    FloodBlackNode(mWhiteNodeCount, failed, pi);
  }

  timeLog.Checkpoint("ScanIncrementalRoots::fix nodes");

  if (failed) {
    NS_ASSERTION(false, "Ran out of memory in ScanIncrementalRoots");
    CC_TELEMETRY(_OOM, true);
  }
}

// Mark nodes white and make sure their refcounts are ok.
// No nodes are marked black during this pass to ensure that refcount
// checking is run on all nodes not marked black by ScanIncrementalRoots.
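//
// The core test below is mInternalRefs == mRefCount: if every reference an
// object admitted to is accounted for by edges inside the graph, nothing
// outside the candidate cycles is keeping it alive. Illustrative example: a
// node with refcount 3 and 3 internal refs is tentatively white (garbage);
// with refcount 3 and only 2 internal refs it stays grey here and is flooded
// black by ScanBlackNodes(). mInternalRefs > mRefCount can only mean the
// graph or the refcounting is corrupt, hence the deliberate crash.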
void
nsCycleCollector::ScanWhiteNodes(bool aFullySynchGraphBuild)
{
  NodePool::Enumerator nodeEnum(mGraph.mNodes);
  while (!nodeEnum.IsDone()) {
    PtrInfo* pi = nodeEnum.GetNext();
    if (pi->mColor == black) {
      // Incremental roots can be in a nonsensical state, so don't
      // check them. This will miss checking nodes that are merely
      // reachable from incremental roots.
      MOZ_ASSERT(!aFullySynchGraphBuild,
                 "In a synch CC, no nodes should be marked black early on.");
      continue;
    }
    MOZ_ASSERT(pi->mColor == grey);

    if (!pi->WasTraversed()) {
      // This node was deleted before it was traversed, so there's no reason
      // to look at it.
      MOZ_ASSERT(!pi->mParticipant,
                 "Live nodes should all have been traversed");
      continue;
    }

    if (pi->mInternalRefs == pi->mRefCount || pi->IsGrayJS()) {
      pi->mColor = white;
      ++mWhiteNodeCount;
      continue;
    }

    if (pi->mInternalRefs > pi->mRefCount) {
#ifdef MOZ_CRASHREPORTER
      const char* piName = "Unknown";
      if (pi->mParticipant) {
        piName = pi->mParticipant->ClassName();
      }
      nsPrintfCString msg("More references to an object than its refcount, "
                          "for class %s", piName);
      CrashReporter::AnnotateCrashReport(NS_LITERAL_CSTRING("CycleCollector"),
                                         msg);
#endif
      MOZ_CRASH();
    }

    // This node will get marked black in the next pass.
  }
}

// Any remaining grey nodes that haven't already been deleted must be alive,
// so mark them and their children black. Any nodes that are black must have
// already had their children marked black, so there's no need to look at them
// again. This pass may turn some white nodes to black.
void
nsCycleCollector::ScanBlackNodes()
{
  bool failed = false;
  NodePool::Enumerator nodeEnum(mGraph.mNodes);
  while (!nodeEnum.IsDone()) {
    PtrInfo* pi = nodeEnum.GetNext();
    if (pi->mColor == grey && pi->WasTraversed()) {
      FloodBlackNode(mWhiteNodeCount, failed, pi);
    }
  }

  if (failed) {
    NS_ASSERTION(false, "Ran out of memory in ScanBlackNodes");
    CC_TELEMETRY(_OOM, true);
  }
}

void
nsCycleCollector::ScanRoots(bool aFullySynchGraphBuild)
{
  JS::AutoAssertNoGC nogc;
  AutoRestore<bool> ar(mScanInProgress);
  MOZ_RELEASE_ASSERT(!mScanInProgress);
  mScanInProgress = true;
  mWhiteNodeCount = 0;
  MOZ_ASSERT(mIncrementalPhase == ScanAndCollectWhitePhase);

  JS::AutoEnterCycleCollection autocc(Context()->Context());

  if (!aFullySynchGraphBuild) {
    ScanIncrementalRoots();
  }

  TimeLog timeLog;
  ScanWhiteNodes(aFullySynchGraphBuild);
  timeLog.Checkpoint("ScanRoots::ScanWhiteNodes");

  ScanBlackNodes();
  timeLog.Checkpoint("ScanRoots::ScanBlackNodes");

  // Scanning weak maps must be done last.
  ScanWeakMaps();
  timeLog.Checkpoint("ScanRoots::ScanWeakMaps");

  if (mLogger) {
    mLogger->BeginResults();

    NodePool::Enumerator etor(mGraph.mNodes);
    while (!etor.IsDone()) {
      PtrInfo* pi = etor.GetNext();
      if (!pi->WasTraversed()) {
        continue;
      }
      switch (pi->mColor) {
        case black:
          if (!pi->IsGrayJS() && !pi->IsBlackJS() &&
              pi->mInternalRefs != pi->mRefCount) {
            mLogger->DescribeRoot((uint64_t)pi->mPointer, pi->mInternalRefs);
          }
          break;
        case white:
          mLogger->DescribeGarbage((uint64_t)pi->mPointer);
          break;
        case grey:
          MOZ_ASSERT(false, "All traversed objects should be black or white");
          break;
      }
    }

    mLogger->End();
    mLogger = nullptr;
    timeLog.Checkpoint("ScanRoots::listener");
  }
}


////////////////////////////////////////////////////////////////////////
// Bacon & Rajan's |CollectWhite| routine, somewhat modified.
////////////////////////////////////////////////////////////////////////

bool
nsCycleCollector::CollectWhite()
{
  // Explanation of "somewhat modified": we have no way to collect the
  // set of whites "all at once", so we have to ask each of them to drop
  // their outgoing links and assume this will cause the garbage cycle
  // to *mostly* self-destruct (except for the reference we continue
  // to hold).
  //
  // To do this "safely" we must make sure that the white nodes we're
  // operating on are stable for the duration of our operation. So we
  // make 3 sets of calls to language runtimes:
  //
  // - Root(whites), which should pin the whites in memory.
  // - Unlink(whites), which drops outgoing links on each white.
  // - Unroot(whites), which returns the whites to normal GC.
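  //
  // From a participant's point of view, a hedged sketch of that contract
  // (the participant hooks are real; the Widget class is hypothetical):
  //
  //   Root(p);    // e.g. NS_ADDREF(static_cast<Widget*>(p)): pin p so that
  //               // unlinking its neighbors cannot free it mid-pass
  //   Unlink(p);  // drop p's strong references (clear members, arrays, ...)
  //   Unroot(p);  // e.g. NS_RELEASE(static_cast<Widget*>(p)): usually the
  //               // final release, which actually destroys p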

  // Segments are 4 KiB on 32-bit and 8 KiB on 64-bit.
  static const size_t kSegmentSize = sizeof(void*) * 1024;
  SegmentedVector<PtrInfo*, kSegmentSize, InfallibleAllocPolicy>
    whiteNodes(kSegmentSize);
  TimeLog timeLog;

  MOZ_ASSERT(mIncrementalPhase == ScanAndCollectWhitePhase);

  uint32_t numWhiteNodes = 0;
  uint32_t numWhiteGCed = 0;
  uint32_t numWhiteJSZones = 0;

  {
    JS::AutoAssertNoGC nogc;
    bool hasJSContext = !!mJSContext;
    nsCycleCollectionParticipant* zoneParticipant =
      hasJSContext ? mJSContext->ZoneParticipant() : nullptr;

    NodePool::Enumerator etor(mGraph.mNodes);
    while (!etor.IsDone()) {
      PtrInfo* pinfo = etor.GetNext();
      if (pinfo->mColor == white && pinfo->mParticipant) {
        if (pinfo->IsGrayJS()) {
          MOZ_ASSERT(mJSContext);
          ++numWhiteGCed;
          JS::Zone* zone;
          if (MOZ_UNLIKELY(pinfo->mParticipant == zoneParticipant)) {
            ++numWhiteJSZones;
            zone = static_cast<JS::Zone*>(pinfo->mPointer);
          } else {
            JS::GCCellPtr ptr(pinfo->mPointer,
                              JS::GCThingTraceKind(pinfo->mPointer));
            zone = JS::GetTenuredGCThingZone(ptr);
          }
          mJSContext->AddZoneWaitingForGC(zone);
        } else {
          whiteNodes.InfallibleAppend(pinfo);
          pinfo->mParticipant->Root(pinfo->mPointer);
          ++numWhiteNodes;
        }
      }
    }
  }

  mResults.mFreedRefCounted += numWhiteNodes;
  mResults.mFreedGCed += numWhiteGCed;
  mResults.mFreedJSZones += numWhiteJSZones;

  timeLog.Checkpoint("CollectWhite::Root");

  if (mBeforeUnlinkCB) {
    mBeforeUnlinkCB();
    timeLog.Checkpoint("CollectWhite::BeforeUnlinkCB");
  }

  // Unlink() can trigger a GC, so do not touch any JS or anything
  // else not in whiteNodes after here.

  for (auto iter = whiteNodes.Iter(); !iter.Done(); iter.Next()) {
    PtrInfo* pinfo = iter.Get();
    MOZ_ASSERT(pinfo->mParticipant,
               "Unlink shouldn't see objects removed from graph.");
    pinfo->mParticipant->Unlink(pinfo->mPointer);
#ifdef DEBUG
    if (mJSContext) {
      mJSContext->AssertNoObjectsToTrace(pinfo->mPointer);
    }
#endif
  }
  timeLog.Checkpoint("CollectWhite::Unlink");

  JS::AutoAssertNoGC nogc;
  for (auto iter = whiteNodes.Iter(); !iter.Done(); iter.Next()) {
    PtrInfo* pinfo = iter.Get();
    MOZ_ASSERT(pinfo->mParticipant,
               "Unroot shouldn't see objects removed from graph.");
    pinfo->mParticipant->Unroot(pinfo->mPointer);
  }
  timeLog.Checkpoint("CollectWhite::Unroot");

  nsCycleCollector_dispatchDeferredDeletion(false, true);
  timeLog.Checkpoint("CollectWhite::dispatchDeferredDeletion");

  mIncrementalPhase = CleanupPhase;

  return numWhiteNodes > 0 || numWhiteGCed > 0 || numWhiteJSZones > 0;
}


////////////////////////
// Memory reporting
////////////////////////

MOZ_DEFINE_MALLOC_SIZE_OF(CycleCollectorMallocSizeOf)

NS_IMETHODIMP
nsCycleCollector::CollectReports(nsIHandleReportCallback* aHandleReport,
                                 nsISupports* aData, bool aAnonymize)
{
  size_t objectSize, graphSize, purpleBufferSize;
  SizeOfIncludingThis(CycleCollectorMallocSizeOf,
                      &objectSize, &graphSize,
                      &purpleBufferSize);

  if (objectSize > 0) {
    MOZ_COLLECT_REPORT(
      "explicit/cycle-collector/collector-object", KIND_HEAP, UNITS_BYTES,
      objectSize,
      "Memory used for the cycle collector object itself.");
  }

  if (graphSize > 0) {
    MOZ_COLLECT_REPORT(
      "explicit/cycle-collector/graph", KIND_HEAP, UNITS_BYTES,
      graphSize,
      "Memory used for the cycle collector's graph. This should be zero when "
      "the collector is idle.");
  }

  if (purpleBufferSize > 0) {
    MOZ_COLLECT_REPORT(
      "explicit/cycle-collector/purple-buffer", KIND_HEAP, UNITS_BYTES,
      purpleBufferSize,
      "Memory used for the cycle collector's purple buffer.");
  }

  return NS_OK;
}


////////////////////////////////////////////////////////////////////////
// Collector implementation
////////////////////////////////////////////////////////////////////////

nsCycleCollector::nsCycleCollector() :
  mActivelyCollecting(false),
  mFreeingSnowWhite(false),
  mScanInProgress(false),
  mJSContext(nullptr),
  mIncrementalPhase(IdlePhase),
#ifdef DEBUG
  mThread(NS_GetCurrentThread()),
#endif
  mWhiteNodeCount(0),
  mBeforeUnlinkCB(nullptr),
  mForgetSkippableCB(nullptr),
  mUnmergedNeeded(0),
  mMergedInARow(0)
{
}

nsCycleCollector::~nsCycleCollector()
{
  UnregisterWeakMemoryReporter(this);
}

void
nsCycleCollector::RegisterJSContext(CycleCollectedJSContext* aJSContext)
{
  MOZ_RELEASE_ASSERT(!mJSContext,
                     "Multiple registrations of JS context in cycle collector");
  mJSContext = aJSContext;

  if (!NS_IsMainThread()) {
    return;
  }

  // We can't register as a reporter in nsCycleCollector() because that runs
  // before the memory reporter manager is initialized. So we do it here
  // instead.
  RegisterWeakMemoryReporter(this);
}

void
nsCycleCollector::ForgetJSContext()
{
  MOZ_RELEASE_ASSERT(mJSContext,
                     "Forgetting JS context in cycle collector before a JS "
                     "context was registered");
  mJSContext = nullptr;
}

#ifdef DEBUG
static bool
HasParticipant(void* aPtr, nsCycleCollectionParticipant* aParti)
{
  if (aParti) {
    return true;
  }

  nsXPCOMCycleCollectionParticipant* xcp;
  ToParticipant(static_cast<nsISupports*>(aPtr), &xcp);
  return xcp != nullptr;
}
#endif

MOZ_ALWAYS_INLINE void
nsCycleCollector::Suspect(void* aPtr, nsCycleCollectionParticipant* aParti,
                          nsCycleCollectingAutoRefCnt* aRefCnt)
{
  CheckThreadSafety();

  // Don't call AddRef or Release of a CCed object in a Traverse() method.
  MOZ_ASSERT(!mScanInProgress,
             "Attempted to call Suspect() while a scan was in progress");

  if (MOZ_UNLIKELY(mScanInProgress)) {
    return;
  }

  MOZ_ASSERT(aPtr, "Don't suspect null pointers");

  MOZ_ASSERT(HasParticipant(aPtr, aParti),
             "Suspected nsISupports pointer must QI to "
             "nsXPCOMCycleCollectionParticipant");

  mPurpleBuf.Put(aPtr, aParti, aRefCnt);
}

void
nsCycleCollector::CheckThreadSafety()
{
#ifdef DEBUG
  nsIThread* currentThread = NS_GetCurrentThread();
  // XXXkhuey we can be called so late in shutdown that NS_GetCurrentThread
  // returns null (after the thread manager has shut down)
  MOZ_ASSERT(mThread == currentThread || !currentThread);
#endif
}

// The cycle collector uses the mark bitmap to discover what JS objects
// were reachable only from XPConnect roots that might participate in
// cycles. We ask the JS context whether we need to force a GC before
// this CC. It returns true on startup (before the mark bits have been set),
// and also when UnmarkGray has run out of stack. We also force GCs on
// shutdown to collect cycles involving both DOM and JS.
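//
// Restating the control flow of FixGrayBits() below: for a normal CC we
// first try the cheap repair (FixWeakMappingGrayBits) and fall through to a
// full GC only when the gray bits are known to be invalid; a shutdown CC
// (aForceGC) skips those checks and always GCs.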
3501 void
FixGrayBits(bool aForceGC,TimeLog & aTimeLog)3502 nsCycleCollector::FixGrayBits(bool aForceGC, TimeLog& aTimeLog)
3503 {
3504 CheckThreadSafety();
3505
3506 if (!mJSContext) {
3507 return;
3508 }
3509
3510 if (!aForceGC) {
3511 mJSContext->FixWeakMappingGrayBits();
3512 aTimeLog.Checkpoint("FixWeakMappingGrayBits");
3513
3514 bool needGC = !mJSContext->AreGCGrayBitsValid();
3515 // Only do a telemetry ping for non-shutdown CCs.
3516 CC_TELEMETRY(_NEED_GC, needGC);
3517 if (!needGC) {
3518 return;
3519 }
3520 mResults.mForcedGC = true;
3521 }
3522
3523 mJSContext->GarbageCollect(aForceGC ? JS::gcreason::SHUTDOWN_CC :
3524 JS::gcreason::CC_FORCED);
3525 aTimeLog.Checkpoint("FixGrayBits GC");
3526 }
3527
3528 bool
IsIncrementalGCInProgress()3529 nsCycleCollector::IsIncrementalGCInProgress()
3530 {
3531 return mJSContext && JS::IsIncrementalGCInProgress(mJSContext->Context());
3532 }
3533
3534 void
FinishAnyIncrementalGCInProgress()3535 nsCycleCollector::FinishAnyIncrementalGCInProgress()
3536 {
3537 if (IsIncrementalGCInProgress()) {
3538 NS_WARNING("Finishing incremental GC in progress during CC");
3539 JS::PrepareForIncrementalGC(mJSContext->Context());
3540 JS::FinishIncrementalGC(mJSContext->Context(), JS::gcreason::CC_FORCED);
3541 }
3542 }
3543
3544 void
CleanupAfterCollection()3545 nsCycleCollector::CleanupAfterCollection()
3546 {
3547 TimeLog timeLog;
3548 MOZ_ASSERT(mIncrementalPhase == CleanupPhase);
3549 mGraph.Clear();
3550 timeLog.Checkpoint("CleanupAfterCollection::mGraph.Clear()");
3551
3552 uint32_t interval =
3553 (uint32_t)((TimeStamp::Now() - mCollectionStart).ToMilliseconds());
3554 #ifdef COLLECT_TIME_DEBUG
3555 printf("cc: total cycle collector time was %ums in %u slices\n", interval,
3556 mResults.mNumSlices);
3557 printf("cc: visited %u ref counted and %u GCed objects, freed %d ref counted and %d GCed objects",
3558 mResults.mVisitedRefCounted, mResults.mVisitedGCed,
3559 mResults.mFreedRefCounted, mResults.mFreedGCed);
3560 uint32_t numVisited = mResults.mVisitedRefCounted + mResults.mVisitedGCed;
  if (numVisited > 1000) {
    uint32_t numFreed = mResults.mFreedRefCounted + mResults.mFreedGCed;
    printf(" (%u%%)", 100 * numFreed / numVisited);
  }
  printf(".\ncc: \n");
#endif

  CC_TELEMETRY( , interval);
  CC_TELEMETRY(_VISITED_REF_COUNTED, mResults.mVisitedRefCounted);
  CC_TELEMETRY(_VISITED_GCED, mResults.mVisitedGCed);
  CC_TELEMETRY(_COLLECTED, mWhiteNodeCount);
  timeLog.Checkpoint("CleanupAfterCollection::telemetry");

  if (mJSContext) {
    mJSContext->FinalizeDeferredThings(mResults.mAnyManual
                                       ? CycleCollectedJSContext::FinalizeNow
                                       : CycleCollectedJSContext::FinalizeIncrementally);
    mJSContext->EndCycleCollectionCallback(mResults);
    timeLog.Checkpoint("CleanupAfterCollection::EndCycleCollectionCallback()");
  }
  mIncrementalPhase = IdlePhase;
}

void
nsCycleCollector::ShutdownCollect()
{
  FinishAnyIncrementalGCInProgress();

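  // Unlinking garbage can release new objects into the purple buffer, so a
  // single pass may not suffice; run full collections until one frees
  // nothing, up to DEFAULT_SHUTDOWN_COLLECTIONS attempts.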
  SliceBudget unlimitedBudget = SliceBudget::unlimited();
  uint32_t i;
  for (i = 0; i < DEFAULT_SHUTDOWN_COLLECTIONS; ++i) {
    if (!Collect(ShutdownCC, unlimitedBudget, nullptr)) {
      break;
    }
  }
  NS_WARNING_ASSERTION(i < NORMAL_SHUTDOWN_COLLECTIONS, "Extra shutdown CC");
}

static void
PrintPhase(const char* aPhase)
{
#ifdef DEBUG_PHASES
  printf("cc: begin %s on %s\n", aPhase,
         NS_IsMainThread() ? "mainthread" : "worker");
#endif
}

bool
nsCycleCollector::Collect(ccType aCCType,
                          SliceBudget& aBudget,
                          nsICycleCollectorListener* aManualListener,
                          bool aPreferShorterSlices)
{
  CheckThreadSafety();

  // This can legitimately happen in a few cases. See bug 383651.
  if (mActivelyCollecting || mFreeingSnowWhite) {
    return false;
  }
  mActivelyCollecting = true;

  MOZ_ASSERT(!IsIncrementalGCInProgress());

  mozilla::Maybe<mozilla::AutoGlobalTimelineMarker> marker;
  if (NS_IsMainThread()) {
    marker.emplace("nsCycleCollector::Collect", MarkerStackRequest::NO_STACK);
  }

  bool startedIdle = IsIdle();
  bool collectedAny = false;

  // If the CC started idle, it will call BeginCollection, which
  // will do FreeSnowWhite, so it doesn't need to be done here.
  if (!startedIdle) {
    TimeLog timeLog;
    FreeSnowWhite(true);
    timeLog.Checkpoint("Collect::FreeSnowWhite");
  }

  if (aCCType != SliceCC) {
    mResults.mAnyManual = true;
  }

  ++mResults.mNumSlices;

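  // With a limited budget, a caller that prefers shorter slices gets exactly
  // one phase per slice; otherwise keep going until the budget runs out.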
  bool continueSlice = aBudget.isUnlimited() || !aPreferShorterSlices;
  do {
    switch (mIncrementalPhase) {
      case IdlePhase:
        PrintPhase("BeginCollection");
        BeginCollection(aCCType, aManualListener);
        break;
      case GraphBuildingPhase:
        PrintPhase("MarkRoots");
        MarkRoots(aBudget);

        // Only continue this slice if we're running synchronously or the
        // next phase will probably be short, to reduce the max pause for
        // this collection.
        // (There's no need to check if we've finished graph building,
        // because if we haven't, we've already exceeded our budget, and
        // will finish this slice anyways.)
        continueSlice = aBudget.isUnlimited() ||
                        (mResults.mNumSlices < 3 && !aPreferShorterSlices);
        break;
      case ScanAndCollectWhitePhase:
        // We do ScanRoots and CollectWhite in a single slice to ensure
        // that we won't unlink a live object if a weak reference is
        // promoted to a strong reference after ScanRoots has finished.
        // See bug 926533.
        PrintPhase("ScanRoots");
        ScanRoots(startedIdle);
        PrintPhase("CollectWhite");
        collectedAny = CollectWhite();
        break;
      case CleanupPhase:
        PrintPhase("CleanupAfterCollection");
        CleanupAfterCollection();
        continueSlice = false;
        break;
    }
    if (continueSlice) {
      // Force SliceBudget::isOverBudget to check the time.
      aBudget.step(SliceBudget::CounterReset);
      continueSlice = !aBudget.isOverBudget();
    }
  } while (continueSlice);

  // Clear mActivelyCollecting here to ensure that a recursive call to
  // Collect() does something.
  mActivelyCollecting = false;

  if (aCCType != SliceCC && !startedIdle) {
    // We were in the middle of an incremental CC (using its own listener).
    // Somebody has forced a CC, so after having finished out the current
    // CC, run the CC again using the new listener.
    MOZ_ASSERT(IsIdle());
    if (Collect(aCCType, aBudget, aManualListener)) {
      collectedAny = true;
    }
  }

  MOZ_ASSERT_IF(aCCType != SliceCC, IsIdle());

  return collectedAny;
}
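
// For illustration only: a hypothetical caller driving an incremental CC
// with time-boxed slices (nsJSContext does something along these lines):
//
//   SliceBudget budget = SliceBudget(js::TimeBudget(10)); // ~10ms slice
//   nsCycleCollector_collectSlice(budget, /* aPreferShorterSlices = */ true);
//
// Each call advances the phase machine above by at least one step and
// returns once the budget is exhausted or the collection completes.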

// Any JS objects we have in the graph could die when we GC, but we
// don't want to abandon the current CC, because the graph contains
// information about purple roots. So we synchronously finish off
// the current CC.
void
nsCycleCollector::PrepareForGarbageCollection()
{
  if (IsIdle()) {
    MOZ_ASSERT(mGraph.IsEmpty(), "Non-empty graph when idle");
    MOZ_ASSERT(!mBuilder, "Non-null builder when idle");
    if (mJSPurpleBuffer) {
      mJSPurpleBuffer->Destroy();
    }
    return;
  }

  FinishAnyCurrentCollection();
}

void
nsCycleCollector::FinishAnyCurrentCollection()
{
  if (IsIdle()) {
    return;
  }

  SliceBudget unlimitedBudget = SliceBudget::unlimited();
  PrintPhase("FinishAnyCurrentCollection");
  // Use SliceCC because we only want to finish the CC in progress.
  Collect(SliceCC, unlimitedBudget, nullptr);

  // It is only okay for Collect() to have failed to finish the
  // current CC if we're reentering the CC at some point past
  // graph building. We need to be past the point where the CC will
  // look at JS objects so that it is safe to GC.
  MOZ_ASSERT(IsIdle() ||
             (mActivelyCollecting && mIncrementalPhase != GraphBuildingPhase),
             "Reentered CC during graph building");
}

// Don't merge too many times in a row, and do at least a minimum
// number of unmerged CCs in a row.
static const uint32_t kMinConsecutiveUnmerged = 3;
static const uint32_t kMaxConsecutiveMerged = 3;

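// For example: once kMaxConsecutiveMerged merged CCs have happened in a row,
// ShouldMergeZones() below sets mUnmergedNeeded to kMinConsecutiveUnmerged,
// which forces exactly that many unmerged CCs before merging is considered
// again.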
bool
nsCycleCollector::ShouldMergeZones(ccType aCCType)
{
  if (!mJSContext) {
    return false;
  }

  MOZ_ASSERT(mUnmergedNeeded <= kMinConsecutiveUnmerged);
  MOZ_ASSERT(mMergedInARow <= kMaxConsecutiveMerged);

  if (mMergedInARow == kMaxConsecutiveMerged) {
    MOZ_ASSERT(mUnmergedNeeded == 0);
    mUnmergedNeeded = kMinConsecutiveUnmerged;
  }

  if (mUnmergedNeeded > 0) {
    mUnmergedNeeded--;
    mMergedInARow = 0;
    return false;
  }

  if (aCCType == SliceCC && mJSContext->UsefulToMergeZones()) {
    mMergedInARow++;
    return true;
  } else {
    mMergedInARow = 0;
    return false;
  }
}

void
nsCycleCollector::BeginCollection(ccType aCCType,
                                  nsICycleCollectorListener* aManualListener)
{
  TimeLog timeLog;
  MOZ_ASSERT(IsIdle());

  mCollectionStart = TimeStamp::Now();

  if (mJSContext) {
    mJSContext->BeginCycleCollectionCallback();
    timeLog.Checkpoint("BeginCycleCollectionCallback()");
  }

  bool isShutdown = (aCCType == ShutdownCC);

  // Set up the listener for this CC.
  MOZ_ASSERT_IF(isShutdown, !aManualListener);
  MOZ_ASSERT(!mLogger, "Forgot to clear a previous listener?");

  if (aManualListener) {
    aManualListener->AsLogger(getter_AddRefs(mLogger));
  }

  aManualListener = nullptr;
  if (!mLogger && mParams.LogThisCC(isShutdown)) {
    mLogger = new nsCycleCollectorLogger();
    if (mParams.AllTracesThisCC(isShutdown)) {
      mLogger->SetAllTraces();
    }
  }

  // On a WantAllTraces CC, force a synchronous global GC to prevent
  // hijinks from ForgetSkippable and compartmental GCs.
  bool forceGC = isShutdown || (mLogger && mLogger->IsAllTraces());

  // BeginCycleCollectionCallback() might have started an IGC, and we need
  // to finish it before we run FixGrayBits.
  FinishAnyIncrementalGCInProgress();
  timeLog.Checkpoint("Pre-FixGrayBits finish IGC");

  FixGrayBits(forceGC, timeLog);

  FreeSnowWhite(true);
  timeLog.Checkpoint("BeginCollection FreeSnowWhite");

  if (mLogger && NS_FAILED(mLogger->Begin())) {
    mLogger = nullptr;
  }

  // FreeSnowWhite could potentially have started an IGC, which we need
  // to finish before we look at any JS roots.
  FinishAnyIncrementalGCInProgress();
  timeLog.Checkpoint("Post-FreeSnowWhite finish IGC");

  // Set up the data structures for building the graph.
  JS::AutoAssertNoGC nogc;
  JS::AutoEnterCycleCollection autocc(mJSContext->Context());
  mGraph.Init();
  mResults.Init();
  mResults.mAnyManual = (aCCType != SliceCC);
  bool mergeZones = ShouldMergeZones(aCCType);
  mResults.mMergedZones = mergeZones;

  MOZ_ASSERT(!mBuilder, "Forgot to clear mBuilder");
  mBuilder = new CCGraphBuilder(mGraph, mResults, mJSContext, mLogger,
                                mergeZones);
  timeLog.Checkpoint("BeginCollection prepare graph builder");

  if (mJSContext) {
    mJSContext->TraverseRoots(*mBuilder);
    timeLog.Checkpoint("mJSContext->TraverseRoots()");
  }

  AutoRestore<bool> ar(mScanInProgress);
  MOZ_RELEASE_ASSERT(!mScanInProgress);
  mScanInProgress = true;
  mPurpleBuf.SelectPointers(*mBuilder);
  timeLog.Checkpoint("SelectPointers()");

  mBuilder->DoneAddingRoots();
  mIncrementalPhase = GraphBuildingPhase;
}

uint32_t
nsCycleCollector::SuspectedCount()
{
  CheckThreadSafety();
  return mPurpleBuf.Count();
}

void
nsCycleCollector::Shutdown(bool aDoCollect)
{
  CheckThreadSafety();

  // Always delete snow white objects.
  FreeSnowWhite(true);

  if (aDoCollect) {
    ShutdownCollect();
  }
}

void
nsCycleCollector::RemoveObjectFromGraph(void* aObj)
{
  if (IsIdle()) {
    return;
  }

  mGraph.RemoveObjectFromMap(aObj);
}

void
nsCycleCollector::SizeOfIncludingThis(mozilla::MallocSizeOf aMallocSizeOf,
                                      size_t* aObjectSize,
                                      size_t* aGraphSize,
                                      size_t* aPurpleBufferSize) const
{
  *aObjectSize = aMallocSizeOf(this);

  *aGraphSize = mGraph.SizeOfExcludingThis(aMallocSizeOf);

  *aPurpleBufferSize = mPurpleBuf.SizeOfExcludingThis(aMallocSizeOf);

  // These fields are deliberately not measured:
  // - mJSContext: because it's non-owning and measured by JS reporters.
  // - mParams: because it only contains scalars.
}

JSPurpleBuffer*
nsCycleCollector::GetJSPurpleBuffer()
{
  if (!mJSPurpleBuffer) {
    // The Release call here confuses the GC analysis.
    JS::AutoSuppressGCAnalysis nogc;
    // JSPurpleBuffer keeps itself alive, but we need to create it in such a
    // way that it ends up in the normal purple buffer. That happens when
    // the RefPtr goes out of scope and calls Release.
    RefPtr<JSPurpleBuffer> pb = new JSPurpleBuffer(mJSPurpleBuffer);
  }
  return mJSPurpleBuffer;
}

////////////////////////////////////////////////////////////////////////
// Module public API (exported in nsCycleCollector.h)
// Just functions that redirect into the singleton, once it's built.
////////////////////////////////////////////////////////////////////////

void
nsCycleCollector_registerJSContext(CycleCollectedJSContext* aCx)
{
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);
  // But we shouldn't already have a context.
  MOZ_ASSERT(!data->mContext);

  data->mContext = aCx;
  data->mCollector->RegisterJSContext(aCx);
}

void
nsCycleCollector_forgetJSContext()
{
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  // And we shouldn't have already forgotten our context.
  MOZ_ASSERT(data->mContext);

  // But the collector may have shut down already.
  if (data->mCollector) {
    data->mCollector->ForgetJSContext();
    data->mContext = nullptr;
  } else {
    data->mContext = nullptr;
    delete data;
    sCollectorData.set(nullptr);
  }
}

/* static */ CycleCollectedJSContext*
CycleCollectedJSContext::Get()
{
  CollectorData* data = sCollectorData.get();
  if (data) {
    return data->mContext;
  }
  return nullptr;
}

MOZ_NEVER_INLINE static void
SuspectAfterShutdown(void* aPtr, nsCycleCollectionParticipant* aCp,
                     nsCycleCollectingAutoRefCnt* aRefCnt,
                     bool* aShouldDelete)
{
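  // A refcount of zero means the object is snow-white and ready for
  // deletion. If the caller cannot handle deletion itself (aShouldDelete is
  // null), destroy the object now; otherwise just report back that it
  // should be deleted.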
  if (aRefCnt->get() == 0) {
    if (!aShouldDelete) {
      // The CC is shut down, so we can't be in the middle of an ICC.
      CanonicalizeParticipant(&aPtr, &aCp);
      aRefCnt->stabilizeForDeletion();
      aCp->DeleteCycleCollectable(aPtr);
    } else {
      *aShouldDelete = true;
    }
  } else {
    // Make sure we'll get called again.
    aRefCnt->RemoveFromPurpleBuffer();
  }
}

void
NS_CycleCollectorSuspect3(void* aPtr, nsCycleCollectionParticipant* aCp,
                          nsCycleCollectingAutoRefCnt* aRefCnt,
                          bool* aShouldDelete)
{
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);

  if (MOZ_LIKELY(data->mCollector)) {
    data->mCollector->Suspect(aPtr, aCp, aRefCnt);
    return;
  }
  SuspectAfterShutdown(aPtr, aCp, aRefCnt, aShouldDelete);
}

uint32_t
nsCycleCollector_suspectedCount()
{
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);

  if (!data->mCollector) {
    return 0;
  }

  return data->mCollector->SuspectedCount();
}

bool
nsCycleCollector_init()
{
#ifdef DEBUG
  static bool sInitialized;

  MOZ_ASSERT(NS_IsMainThread(), "Wrong thread!");
  MOZ_ASSERT(!sInitialized, "Called twice!?");
  sInitialized = true;
#endif

  return sCollectorData.init();
}

void
nsCycleCollector_startup()
{
  if (sCollectorData.get()) {
    MOZ_CRASH();
  }

  CollectorData* data = new CollectorData;
  data->mCollector = new nsCycleCollector();
  data->mContext = nullptr;

  sCollectorData.set(data);
}
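
// Illustrative startup ordering, inferred from the assertions in these
// functions (a sketch, not a complete embedding guide):
//
//   nsCycleCollector_init();                  // once per process, main thread
//   nsCycleCollector_startup();               // once per cycle-collecting thread
//   nsCycleCollector_registerJSContext(aCx);  // if the thread runs JS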

void
nsCycleCollector_setBeforeUnlinkCallback(CC_BeforeUnlinkCallback aCB)
{
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);

  data->mCollector->SetBeforeUnlinkCallback(aCB);
}

void
nsCycleCollector_setForgetSkippableCallback(CC_ForgetSkippableCallback aCB)
{
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);

  data->mCollector->SetForgetSkippableCallback(aCB);
}

void
nsCycleCollector_forgetSkippable(bool aRemoveChildlessNodes,
                                 bool aAsyncSnowWhiteFreeing)
{
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);

  PROFILER_LABEL("nsCycleCollector", "forgetSkippable",
                 js::ProfileEntry::Category::CC);

  TimeLog timeLog;
  data->mCollector->ForgetSkippable(aRemoveChildlessNodes,
                                    aAsyncSnowWhiteFreeing);
  timeLog.Checkpoint("ForgetSkippable()");
}

void
nsCycleCollector_dispatchDeferredDeletion(bool aContinuation, bool aPurge)
{
  CycleCollectedJSContext* cx = CycleCollectedJSContext::Get();
  if (cx) {
    cx->DispatchDeferredDeletion(aContinuation, aPurge);
  }
}

bool
nsCycleCollector_doDeferredDeletion()
{
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);
  MOZ_ASSERT(data->mContext);

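  // FreeSnowWhite(false) does not insist on emptying the purple buffer of
  // snow-white objects in one pass; the return value reports whether any
  // were found, so the caller can schedule another round of deferred
  // deletion.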
  return data->mCollector->FreeSnowWhite(false);
}

already_AddRefed<nsICycleCollectorLogSink>
nsCycleCollector_createLogSink()
{
  nsCOMPtr<nsICycleCollectorLogSink> sink = new nsCycleCollectorLogSinkToFile();
  return sink.forget();
}

void
nsCycleCollector_collect(nsICycleCollectorListener* aManualListener)
{
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);

  PROFILER_LABEL("nsCycleCollector", "collect",
                 js::ProfileEntry::Category::CC);

  SliceBudget unlimitedBudget = SliceBudget::unlimited();
  data->mCollector->Collect(ManualCC, unlimitedBudget, aManualListener);
}

void
nsCycleCollector_collectSlice(SliceBudget& budget,
                              bool aPreferShorterSlices)
{
  CollectorData* data = sCollectorData.get();

  // We should have started the cycle collector by now.
  MOZ_ASSERT(data);
  MOZ_ASSERT(data->mCollector);

  PROFILER_LABEL("nsCycleCollector", "collectSlice",
                 js::ProfileEntry::Category::CC);

  data->mCollector->Collect(SliceCC, budget, nullptr, aPreferShorterSlices);
}

void
nsCycleCollector_prepareForGarbageCollection()
{
  CollectorData* data = sCollectorData.get();

  MOZ_ASSERT(data);

  if (!data->mCollector) {
    return;
  }

  data->mCollector->PrepareForGarbageCollection();
}

void
nsCycleCollector_finishAnyCurrentCollection()
{
  CollectorData* data = sCollectorData.get();

  MOZ_ASSERT(data);

  if (!data->mCollector) {
    return;
  }

  data->mCollector->FinishAnyCurrentCollection();
}

void
nsCycleCollector_shutdown(bool aDoCollect)
{
  CollectorData* data = sCollectorData.get();

  if (data) {
    MOZ_ASSERT(data->mCollector);
    PROFILER_LABEL("nsCycleCollector", "shutdown",
                   js::ProfileEntry::Category::CC);

    data->mCollector->Shutdown(aDoCollect);
    data->mCollector = nullptr;
    if (data->mContext) {
      // Run any remaining tasks that may have been enqueued via
      // RunInStableState during the final cycle collection.
      data->mContext->ProcessStableStateQueue();
    }
    if (!data->mContext) {
      delete data;
      sCollectorData.set(nullptr);
    }
  }
}