Repo created

This commit is contained in:
Fr4nz D13trich 2025-11-22 14:04:28 +01:00
parent 81b91f4139
commit f8c34fa5ee
22732 changed files with 4815320 additions and 2 deletions

@ -0,0 +1,3 @@
danakj@chromium.org
dcheng@chromium.org
vmpstr@chromium.org

@ -0,0 +1,363 @@
# base/containers library
[TOC]
## What goes here
This directory contains some STL-like containers.
Things should be moved here that are generally applicable across the code base.
Don't add things here just because you need them in one place and think others
may someday want something similar. You can put specialized containers in
your component's directory and we can promote them here later if we feel there
is broad applicability.
### Design and naming
Fundamental [//base principles](../README.md#design-and-naming) apply, i.e.:
Containers should adhere as closely to STL as possible. Functions and behaviors
not present in STL should only be added when they are related to the specific
data structure implemented by the container.
For STL-like containers our policy is that they should use STL-like naming even
when it may conflict with the style guide. So functions and class names should
be lower case with underscores. Non-STL-like classes and functions should use
Google naming. Be sure to use the base namespace.
## Map and set selection
### Usage advice
* Generally avoid `std::unordered_set` and `std::unordered_map`. In the common
case, query performance is unlikely to be sufficiently higher than
`std::map` to make a difference, insert performance is slightly worse, and
the memory overhead is high. This makes sense mostly for large tables where
you expect a lot of lookups.
* Most maps and sets in Chrome are small and contain objects that can be moved
efficiently. In this case, consider `base::flat_map` and `base::flat_set`.
You need to be aware of the maximum expected size of the container since
individual inserts and deletes are O(n), giving O(n^2) construction time for
the entire map. But because it avoids mallocs in most cases, inserts are
better or comparable to other containers even for several dozen items, and
efficiently-moved types are unlikely to have performance problems for most
cases until you have hundreds of items. If your container can be constructed
in one shot, the constructor from vector gives O(n log n) construction times
and it should be strictly better than a `std::map`.
  Conceptually, inserting a range of n elements into a `base::flat_map` or
  `base::flat_set` behaves as if `insert()` were called for each element
  individually. Thus, if the input range contains repeated elements, only the
  first of the duplicates is inserted into the container. This behavior
  applies to construction from a range as well.
* `base::small_map` has better runtime memory usage without the poor mutation
  performance that `base::flat_map` has for large containers. But this
  advantage is partially offset by additional code size. Prefer it when you
  create many such maps, so the per-instance heap savings outweigh the
  one-time code-size cost.
* Use `std::map` and `std::set` if you can't decide. Even if they're not
great, they're unlikely to be bad or surprising.
### Map and set details
Sizes are on 64-bit platforms. Stable iterators aren't invalidated when the
container is mutated.
| Container | Empty size | Per-item overhead | Stable iterators? |
|:------------------------------------------ |:--------------------- |:----------------- |:----------------- |
| `std::map`, `std::set` | 16 bytes | 32 bytes | Yes |
| `std::unordered_map`, `std::unordered_set` | 128 bytes | 16 - 24 bytes | No |
| `base::flat_map`, `base::flat_set` | 24 bytes | 0 (see notes) | No |
| `base::small_map` | 24 bytes (see notes) | 32 bytes | No |
**Takeaways:** `std::unordered_map` and `std::unordered_set` have high
overhead for small container sizes, so prefer these only for larger workloads.
Code size comparisons for a block of code (see appendix) on Windows using
strings as keys.
| Container | Code size |
|:-------------------- |:---------- |
| `std::unordered_map` | 1646 bytes |
| `std::map` | 1759 bytes |
| `base::flat_map` | 1872 bytes |
| `base::small_map` | 2410 bytes |
**Takeaways:** `base::small_map` generates more code because of the inlining of
both brute-force and red-black tree searching. This makes it less attractive
for random one-off uses. But if your code is called frequently, the runtime
memory benefits will be more important. The code sizes of the other maps are
close enough it's not worth worrying about.
### std::map and std::set
A red-black tree. Each inserted item requires the memory allocation of a node
on the heap. Each node contains a left pointer, a right pointer, a parent
pointer, and a "color" for the red-black tree (32 bytes per item on 64-bit
platforms).
### std::unordered\_map and std::unordered\_set
A hash table. Implemented on Windows as a `std::vector` + `std::list` and in libc++
as the equivalent of a `std::vector` + a `std::forward_list`. Both implementations
allocate an 8-entry hash table (containing iterators into the list) on
initialization, and grow to 64 entries once 8 items are inserted. Above 64
items, the size doubles every time the load factor exceeds 1.
The empty size is `sizeof(std::unordered_map)` = 64 + the initial hash table
size which is 8 pointers. The per-item overhead in the table above counts the
list node (2 pointers on Windows, 1 pointer in libc++), plus amortizes the hash
table assuming a 0.5 load factor on average.
In a microbenchmark on Windows, inserts of 1M integers into a
`std::unordered_set` took 1.07x the time of `std::set`, and queries took 0.67x
the time of `std::set`. For a typical 4-entry set (the statistical mode of map
sizes in the browser), query performance is identical to `std::set` and
`base::flat_set`. On ARM, `std::unordered_set` performance can be worse because
integer division to compute the bucket is slow, and a few "less than" operations
can be faster than computing a hash depending on the key type. The takeaway is
that you should not default to using unordered maps because "they're faster."
### base::flat\_map and base::flat\_set
A sorted `std::vector`. Searched via binary search; inserts in the middle
require moving elements to make room. Good cache locality. For large objects
and large set sizes, `std::vector`'s doubling-when-full strategy can waste
memory.
Supports efficient construction from a vector of items which avoids the O(n^2)
insertion time of each element separately.
The per-item overhead will depend on the underlying `std::vector`'s reallocation
strategy and the memory access pattern. Assuming items are being linearly added,
one would expect it to be 3/4 full, so per-item overhead will be 0.25 *
sizeof(T).
`flat_set` and `flat_map` support a notion of transparent comparisons.
Therefore you can, for example, look up a `base::StringPiece` in a set of
`std::string`s without constructing a temporary `std::string`. This
functionality is based on the C++14 extensions to the `std::set`/`std::map`
interface.
You can find more information about transparent comparisons in [the `less<void>`
documentation](https://en.cppreference.com/w/cpp/utility/functional/less_void).
Example, smart pointer set:
```cpp
// Declare a type alias using base::UniquePtrComparator.
template <typename T>
using UniquePtrSet =
    base::flat_set<std::unique_ptr<T>, base::UniquePtrComparator>;

// ...
// Collect data.
std::vector<std::unique_ptr<int>> ptr_vec;
ptr_vec.reserve(5);
std::generate_n(std::back_inserter(ptr_vec), 5,
                [] { return std::make_unique<int>(0); });

// Construct a set.
UniquePtrSet<int> ptr_set(std::move(ptr_vec));

// Use raw pointers to look up keys.
int* ptr = ptr_set.begin()->get();
EXPECT_TRUE(ptr_set.find(ptr) == ptr_set.begin());
```
Example `flat_map<std::string, int>`:
```cpp
base::flat_map<std::string, int> str_to_int({{"a", 1}, {"c", 2}, {"b", 2}});
// Does not construct temporary strings.
str_to_int.find("c")->second = 3;
str_to_int.erase("c");
EXPECT_EQ(str_to_int.end(), str_to_int.find("c"));
// NOTE: This does construct a temporary string. This happens since if the
// item is not in the container, then it needs to be constructed, which is
// something that transparent comparators don't have to guarantee.
str_to_int["c"] = 3;
```
### base::small\_map
A small inline buffer, searched by brute force, that overflows into a full
`std::map` or `std::unordered_map`. This gives the memory benefit of
`base::flat_map` for small data sizes without the degenerate insertion
performance for large container sizes.
Since instantiations require both code for a `std::map` and a brute-force search
of the inline container, plus a fancy iterator to cover both cases, code size
is larger.
The initial size in the above table assumes a very small inline table. The
actual size will be `sizeof(int) + min(sizeof(std::map), sizeof(T) *
inline_size)`.
## Deque
### Usage advice
Chromium code should always use `base::circular_deque` or `base::queue` in
preference to `std::deque` or `std::queue` due to memory usage and platform
variation.
The `base::circular_deque` implementation (and the `base::queue` that uses it)
provides performance that is consistent across platforms and better matches
most programmers' expectations (it doesn't waste as much space as libc++'s
deque and doesn't do as many heap allocations as MSVC's). It also generates
less code than `std::queue`: using it across the code base saves several
hundred kilobytes.
Since `base::circular_deque` does not have stable iterators and it will move
the objects it contains, it may not be appropriate for all uses. If you need
those guarantees, consider using a `std::list`, which provides constant-time
insert and erase.
### std::deque and std::queue
The implementation of `std::deque` varies considerably which makes it hard to
reason about. All implementations use a sequence of data blocks referenced by
an array of pointers. The standard guarantees random access, amortized
constant operations at the ends, and linear mutations in the middle.
In Microsoft's implementation, each block is the smaller of 16 bytes or the
size of the contained element. This means in practice that every expansion of
the deque of non-trivial classes requires a heap allocation. libc++ (on Android
and Mac) uses 4K blocks which eliminates the problem of many heap allocations,
but generally wastes a large amount of space (an Android analysis revealed more
than 2.5MB wasted space from deque alone, resulting in some optimizations).
libstdc++ uses an intermediate-size 512-byte buffer.
Microsoft's implementation never shrinks the deque capacity, so the capacity
will always be the maximum number of elements ever contained. libstdc++
deallocates blocks as they are freed. libc++ keeps up to two empty blocks.
### base::circular_deque and base::queue
A deque implemented as a circular buffer in an array. The underlying array will
grow like a `std::vector` while the beginning and end of the deque will move
around. The items will wrap around the underlying buffer so the storage will
not be contiguous, but fast random access iterators are still possible.
When the underlying buffer is filled, it will be reallocated and the contents
moved (like a `std::vector`). The underlying buffer will be shrunk if there is
too much wasted space (_unlike_ a `std::vector`). As a result, iterators are
not stable across mutations.
## Stack
`std::stack` is like `std::queue` in that it is a wrapper around an underlying
container. The default container is `std::deque` so everything from the deque
section applies.
Chromium provides `base/containers/stack.h` which defines `base::stack` that
should be used in preference to `std::stack`. This changes the underlying
container to `base::circular_deque`. The result will be very similar to
manually specifying a `std::vector` for the underlying implementation except
that the storage will shrink when it gets too empty (vector will never
reallocate to a smaller size).
Watch out: with some stack usage patterns it's easy to depend on unstable
behavior:
```cpp
base::stack<Foo> stack;
for (...) {
  Foo& current = stack.top();
  DoStuff();            // May call stack.push(), say if writing a parser.
  current.done = true;  // |current| may reference a deleted item!
}
```
## Safety
Code throughout Chromium, running at any level of privilege, may directly or
indirectly depend on these containers. Much calling code implicitly or
explicitly assumes that these containers are safe, and won't corrupt memory.
Unfortunately, [such assumptions have not always proven
true](https://bugs.chromium.org/p/chromium/issues/detail?id=817982).
Therefore, we are making an effort to ensure basic safety in these classes so
that callers' assumptions are true. In particular, we are adding bounds checks,
arithmetic overflow checks, and checks for internal invariants to the base
containers where necessary. Here, safety means that the implementation will
`CHECK`.
As of 8 August 2018, we have added checks to the following classes:
- `base::StringPiece`
- `base::span`
- `base::Optional`
- `base::RingBuffer`
- `base::small_map`
Ultimately, all base containers will have these checks.
### Safety, completeness, and efficiency
Safety checks can affect performance at the micro-scale, although they do not
always. On a larger scale, if we can have confidence that these fundamental
classes and templates are minimally safe, we can sometimes avoid the security
requirement to sandbox code that (for example) processes untrustworthy inputs.
Sandboxing is a relatively heavyweight response to memory safety problems, and
in our experience not all callers can afford to pay it.
(However, where affordable, privilege separation and reduction remain Chrome
Security Team's first approach to a variety of safety and security problems.)
One could instead argue that the safety checks should be left to the callers
who require safety. There are several problems with that approach:
- Not all authors of all call sites will always
- know when they need safety
- remember to write the checks
- write the checks correctly
- write the checks maximally efficiently, considering
- space
- time
- object code size
- These classes typically do not document themselves as being unsafe
- Some call sites have their requirements change over time
- Code that gets moved from a low-privilege process into a high-privilege
process
- Code that changes from accepting inputs from only trustworthy sources to
accepting inputs from all sources
- Putting the checks in every call site results in strictly larger object code
than centralizing them in the callee
Therefore, the minimal checks that we are adding to these base classes are the
most efficient and effective way to achieve the beginning of the safety that we
need. (Note that we cannot account for undefined behavior in callers.)
## Appendix
### Code for map code size comparison
This just calls insert and query a number of times, with `printf`s that prevent
things from being dead-code eliminated.
```cpp
TEST(Foo, Bar) {
  base::small_map<std::map<std::string, Flubber>> foo;
  foo.insert(std::make_pair("foo", Flubber(8, "bar")));
  foo.insert(std::make_pair("bar", Flubber(8, "bar")));
  foo.insert(std::make_pair("foo1", Flubber(8, "bar")));
  foo.insert(std::make_pair("bar1", Flubber(8, "bar")));
  foo.insert(std::make_pair("foo", Flubber(8, "bar")));
  foo.insert(std::make_pair("bar", Flubber(8, "bar")));
  auto found = foo.find("asdf");
  printf("Found is %d\n", (int)(found == foo.end()));
  found = foo.find("foo");
  printf("Found is %d\n", (int)(found == foo.end()));
  found = foo.find("bar");
  printf("Found is %d\n", (int)(found == foo.end()));
  found = foo.find("asdfhf");
  printf("Found is %d\n", (int)(found == foo.end()));
  found = foo.find("bar1");
  printf("Found is %d\n", (int)(found == foo.end()));
}
```

@ -0,0 +1,55 @@
// Copyright 2014 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef BASE_CONTAINERS_ADAPTERS_H_
#define BASE_CONTAINERS_ADAPTERS_H_

#include <stddef.h>

#include <iterator>
#include <utility>

#include "base/macros.h"

namespace base {

namespace internal {

// Internal adapter class for implementing base::Reversed.
template <typename T>
class ReversedAdapter {
 public:
  using Iterator = decltype(std::rbegin(std::declval<T&>()));

  explicit ReversedAdapter(T& t) : t_(t) {}
  ReversedAdapter(const ReversedAdapter& ra) : t_(ra.t_) {}

  Iterator begin() const { return std::rbegin(t_); }
  Iterator end() const { return std::rend(t_); }

 private:
  T& t_;

  DISALLOW_ASSIGN(ReversedAdapter);
};

}  // namespace internal

// Reversed returns a container adapter usable in a range-based "for" statement
// for iterating a reversible container in reverse order.
//
// Example:
//
//   std::vector<int> v = ...;
//   for (int i : base::Reversed(v)) {
//     // iterates through v from back to front
//   }
template <typename T>
internal::ReversedAdapter<T> Reversed(T& t) {
  return internal::ReversedAdapter<T>(t);
}

}  // namespace base

#endif  // BASE_CONTAINERS_ADAPTERS_H_

@ -0,0 +1,145 @@
// Copyright 2019 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef BASE_CONTAINERS_BUFFER_ITERATOR_H_
#define BASE_CONTAINERS_BUFFER_ITERATOR_H_
#include <type_traits>
#include "base/bit_cast.h"
#include "base/containers/span.h"
#include "base/numerics/checked_math.h"
namespace base {
// BufferIterator is a bounds-checked container utility to access variable-
// length, heterogeneous structures contained within a buffer. If the data are
// homogeneous, use base::span<> instead.
//
// After being created with a weakly-owned buffer, BufferIterator returns
// pointers to structured data within the buffer. After each method call that
// returns data in the buffer, the iterator position is advanced by the byte
// size of the object (or span of objects) returned. If there are not enough
// bytes remaining in the buffer to return the requested object(s), a nullptr
// or empty span is returned.
//
// This class is similar to base::Pickle, which should be preferred for
// serializing to disk. Pickle versions its header and does not support writing
// structures, which are problematic for serialization due to struct padding and
// version shear concerns.
//
// Example usage:
//
// std::vector<uint8_t> buffer(4096);
// if (!ReadSomeData(&buffer, buffer.size())) {
// LOG(ERROR) << "Failed to read data.";
// return false;
// }
//
// BufferIterator<uint8_t> iterator(buffer);
// uint32_t* num_items = iterator.Object<uint32_t>();
// if (!num_items) {
// LOG(ERROR) << "No num_items field.";
// return false;
// }
//
// base::span<const item_struct> items =
// iterator.Span<item_struct>(*num_items);
// if (items.size() != *num_items) {
// LOG(ERROR) << "Not enough items.";
// return false;
// }
//
// // ... validate the objects in |items|.
template <typename B>
class BufferIterator {
public:
static_assert(std::is_same<std::remove_const_t<B>, char>::value ||
std::is_same<std::remove_const_t<B>, unsigned char>::value,
"Underlying buffer type must be char-type.");
BufferIterator() {}
BufferIterator(B* data, size_t size)
: BufferIterator(make_span(data, size)) {}
explicit BufferIterator(span<B> buffer)
: buffer_(buffer), remaining_(buffer) {}
~BufferIterator() {}
// Returns a pointer to a mutable structure T in the buffer at the current
// position. On success, the iterator position is advanced by sizeof(T). If
// there are not sizeof(T) bytes remaining in the buffer, returns nullptr.
template <typename T,
typename =
typename std::enable_if_t<std::is_trivially_copyable<T>::value>>
T* MutableObject() {
size_t size = sizeof(T);
size_t next_position;
if (!CheckAdd(position(), size).AssignIfValid(&next_position))
return nullptr;
if (next_position > total_size())
return nullptr;
T* t = bit_cast<T*>(remaining_.data());
remaining_ = remaining_.subspan(size);
return t;
}
// Returns a const pointer to an object of type T in the buffer at the current
// position.
template <typename T,
typename =
typename std::enable_if_t<std::is_trivially_copyable<T>::value>>
const T* Object() {
return MutableObject<const T>();
}
// Returns a span of |count| T objects in the buffer at the current position.
// On success, the iterator position is advanced by |sizeof(T) * count|. If
// there are not enough bytes remaining in the buffer to fulfill the request,
// returns an empty span.
template <typename T,
typename =
typename std::enable_if_t<std::is_trivially_copyable<T>::value>>
span<T> MutableSpan(size_t count) {
size_t size;
if (!CheckMul(sizeof(T), count).AssignIfValid(&size))
return span<T>();
size_t next_position;
if (!CheckAdd(position(), size).AssignIfValid(&next_position))
return span<T>();
if (next_position > total_size())
return span<T>();
auto result = span<T>(bit_cast<T*>(remaining_.data()), count);
remaining_ = remaining_.subspan(size);
return result;
}
// Returns a span to |count| const objects of type T in the buffer at the
// current position.
template <typename T,
typename =
typename std::enable_if_t<std::is_trivially_copyable<T>::value>>
span<const T> Span(size_t count) {
return MutableSpan<const T>(count);
}
// Resets the iterator position to the absolute offset |to|.
void Seek(size_t to) { remaining_ = buffer_.subspan(to); }
// Returns the total size of the underlying buffer.
size_t total_size() { return buffer_.size(); }
// Returns the current position in the buffer.
size_t position() { return buffer_.size_bytes() - remaining_.size_bytes(); }
private:
// The original buffer that the iterator was constructed with.
const span<B> buffer_;
// A subspan of |buffer_| containing the remaining bytes to iterate over.
span<B> remaining_;
// Copy and assign allowed.
};
} // namespace base
#endif // BASE_CONTAINERS_BUFFER_ITERATOR_H_

@ -0,0 +1,209 @@
// Copyright 2018 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef BASE_CONTAINERS_CHECKED_ITERATORS_H_
#define BASE_CONTAINERS_CHECKED_ITERATORS_H_
#include <iterator>
#include <memory>
#include <type_traits>
#include "base/containers/util.h"
#include "base/logging.h"
namespace base {
template <typename T>
class CheckedContiguousIterator {
public:
using difference_type = std::ptrdiff_t;
using value_type = std::remove_cv_t<T>;
using pointer = T*;
using reference = T&;
using iterator_category = std::random_access_iterator_tag;
// Required for converting constructor below.
template <typename U>
friend class CheckedContiguousIterator;
constexpr CheckedContiguousIterator() = default;
constexpr CheckedContiguousIterator(T* start, const T* end)
: CheckedContiguousIterator(start, start, end) {}
constexpr CheckedContiguousIterator(const T* start, T* current, const T* end)
: start_(start), current_(current), end_(end) {
CHECK_LE(start, current);
CHECK_LE(current, end);
}
constexpr CheckedContiguousIterator(const CheckedContiguousIterator& other) =
default;
// Converting constructor allowing conversions like CCI<T> to CCI<const T>,
// but disallowing CCI<const T> to CCI<T> or CCI<Derived> to CCI<Base>, which
// are unsafe. Furthermore, this is the same condition as used by the
// converting constructors of std::span<T> and std::unique_ptr<T[]>.
// See https://wg21.link/n4042 for details.
template <
typename U,
std::enable_if_t<std::is_convertible<U (*)[], T (*)[]>::value>* = nullptr>
constexpr CheckedContiguousIterator(const CheckedContiguousIterator<U>& other)
: start_(other.start_), current_(other.current_), end_(other.end_) {
// We explicitly don't delegate to the 3-argument constructor here. Its
// CHECKs would be redundant, since we expect |other| to maintain its own
// invariant. However, DCHECKs never hurt anybody. Presumably.
DCHECK_LE(other.start_, other.current_);
DCHECK_LE(other.current_, other.end_);
}
~CheckedContiguousIterator() = default;
constexpr CheckedContiguousIterator& operator=(
const CheckedContiguousIterator& other) = default;
friend constexpr bool operator==(const CheckedContiguousIterator& lhs,
const CheckedContiguousIterator& rhs) {
lhs.CheckComparable(rhs);
return lhs.current_ == rhs.current_;
}
friend constexpr bool operator!=(const CheckedContiguousIterator& lhs,
const CheckedContiguousIterator& rhs) {
lhs.CheckComparable(rhs);
return lhs.current_ != rhs.current_;
}
friend constexpr bool operator<(const CheckedContiguousIterator& lhs,
const CheckedContiguousIterator& rhs) {
lhs.CheckComparable(rhs);
return lhs.current_ < rhs.current_;
}
friend constexpr bool operator<=(const CheckedContiguousIterator& lhs,
const CheckedContiguousIterator& rhs) {
lhs.CheckComparable(rhs);
return lhs.current_ <= rhs.current_;
}
friend constexpr bool operator>(const CheckedContiguousIterator& lhs,
const CheckedContiguousIterator& rhs) {
lhs.CheckComparable(rhs);
return lhs.current_ > rhs.current_;
}
friend constexpr bool operator>=(const CheckedContiguousIterator& lhs,
const CheckedContiguousIterator& rhs) {
lhs.CheckComparable(rhs);
return lhs.current_ >= rhs.current_;
}
constexpr CheckedContiguousIterator& operator++() {
CHECK_NE(current_, end_);
++current_;
return *this;
}
constexpr CheckedContiguousIterator operator++(int) {
CheckedContiguousIterator old = *this;
++*this;
return old;
}
constexpr CheckedContiguousIterator& operator--() {
CHECK_NE(current_, start_);
--current_;
return *this;
}
constexpr CheckedContiguousIterator operator--(int) {
CheckedContiguousIterator old = *this;
--*this;
return old;
}
constexpr CheckedContiguousIterator& operator+=(difference_type rhs) {
if (rhs > 0) {
CHECK_LE(rhs, end_ - current_);
} else {
CHECK_LE(-rhs, current_ - start_);
}
current_ += rhs;
return *this;
}
constexpr CheckedContiguousIterator operator+(difference_type rhs) const {
CheckedContiguousIterator it = *this;
it += rhs;
return it;
}
constexpr CheckedContiguousIterator& operator-=(difference_type rhs) {
if (rhs < 0) {
CHECK_LE(-rhs, end_ - current_);
} else {
CHECK_LE(rhs, current_ - start_);
}
current_ -= rhs;
return *this;
}
constexpr CheckedContiguousIterator operator-(difference_type rhs) const {
CheckedContiguousIterator it = *this;
it -= rhs;
return it;
}
constexpr friend difference_type operator-(
const CheckedContiguousIterator& lhs,
const CheckedContiguousIterator& rhs) {
lhs.CheckComparable(rhs);
return lhs.current_ - rhs.current_;
}
constexpr reference operator*() const {
CHECK_NE(current_, end_);
return *current_;
}
constexpr pointer operator->() const {
CHECK_NE(current_, end_);
return current_;
}
constexpr reference operator[](difference_type rhs) const {
CHECK_GE(rhs, 0);
CHECK_LT(rhs, end_ - current_);
return current_[rhs];
}
static bool IsRangeMoveSafe(const CheckedContiguousIterator& from_begin,
const CheckedContiguousIterator& from_end,
const CheckedContiguousIterator& to)
WARN_UNUSED_RESULT {
if (from_end < from_begin)
return false;
const auto from_begin_uintptr = get_uintptr(from_begin.current_);
const auto from_end_uintptr = get_uintptr(from_end.current_);
const auto to_begin_uintptr = get_uintptr(to.current_);
const auto to_end_uintptr =
get_uintptr((to + std::distance(from_begin, from_end)).current_);
return to_begin_uintptr >= from_end_uintptr ||
to_end_uintptr <= from_begin_uintptr;
}
private:
constexpr void CheckComparable(const CheckedContiguousIterator& other) const {
CHECK_EQ(start_, other.start_);
CHECK_EQ(end_, other.end_);
}
const T* start_ = nullptr;
T* current_ = nullptr;
const T* end_ = nullptr;
};
template <typename T>
using CheckedContiguousConstIterator = CheckedContiguousIterator<const T>;
} // namespace base
#endif // BASE_CONTAINERS_CHECKED_ITERATORS_H_

View file

@ -0,0 +1,173 @@
// Copyright 2019 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef BASE_CONTAINERS_CHECKED_RANGE_H_
#define BASE_CONTAINERS_CHECKED_RANGE_H_
#include <stddef.h>
#include <iterator>
#include <type_traits>
#include "base/containers/checked_iterators.h"
#include "base/stl_util.h"
namespace base {
// CheckedContiguousRange is a light-weight wrapper around a container modeling
// the ContiguousContainer requirement [1, 2]. Effectively this means that the
// container stores its elements contiguous in memory. Furthermore, it is
// expected that base::data(container) and base::size(container) are valid
// expressions, and that data() + idx is dereferenceable for all idx in the
// range [0, size()). In the standard library this includes the containers
// std::string, std::vector and std::array, but other containers like
// std::initializer_list and C arrays are supported as well.
//
// In general this class is in nature quite similar to base::span, and its API
// is inspired by it. Similarly to base::span (and other view-like containers
// such as base::StringPiece) callers are encouraged to pass checked ranges by
// value.
//
// However, one important difference is that this class stores a pointer to the
// underlying container (as opposed to just storing its data() and size()), and
// thus is able to deal with changes to the container, such as removing or
// adding elements.
//
// Note however that this class still does not extend the life-time of the
// underlying container, and thus callers need to make sure that the container
// outlives the view to avoid dangling pointers and references.
//
// Lastly, this class leverages base::CheckedContiguousIterator to perform
// bounds CHECKs, causing program termination when e.g. dereferencing the end
// iterator.
//
// [1] https://en.cppreference.com/w/cpp/named_req/ContiguousContainer
// [2]
// https://eel.is/c++draft/container.requirements.general#def:contiguous_container
template <typename ContiguousContainer>
class CheckedContiguousRange {
public:
using element_type = std::remove_pointer_t<decltype(
base::data(std::declval<ContiguousContainer&>()))>;
using value_type = std::remove_cv_t<element_type>;
using reference = element_type&;
using const_reference = const element_type&;
using pointer = element_type*;
using const_pointer = const element_type*;
using iterator = CheckedContiguousIterator<element_type>;
using const_iterator = CheckedContiguousConstIterator<element_type>;
using reverse_iterator = std::reverse_iterator<iterator>;
using const_reverse_iterator = std::reverse_iterator<const_iterator>;
using difference_type = typename iterator::difference_type;
using size_type = size_t;
static_assert(!std::is_reference<ContiguousContainer>::value,
"Error: ContiguousContainer can not be a reference.");
// Required for converting constructor below.
template <typename Container>
friend class CheckedContiguousRange;
// Default constructor. Behaves as if the underlying container was empty.
constexpr CheckedContiguousRange() noexcept = default;
// Templated constructor restricted to possibly cvref qualified versions of
// ContiguousContainer. This makes sure it does not shadow the auto generated
// copy and move constructors.
template <int&... ExplicitArgumentBarrier,
typename Container,
typename = std::enable_if_t<std::is_same<
std::remove_cv_t<std::remove_reference_t<ContiguousContainer>>,
std::remove_cv_t<std::remove_reference_t<Container>>>::value>>
constexpr CheckedContiguousRange(Container&& container) noexcept
: container_(&container) {}
// Converting constructor allowing conversions like CCR<C> to CCR<const C>,
// but disallowing CCR<const C> to CCR<C> or CCR<Derived[]> to CCR<Base[]>,
// which are unsafe. Furthermore, this is the same condition as used by the
// converting constructors of std::span<T> and std::unique_ptr<T[]>.
// See https://wg21.link/n4042 for details.
template <int&... ExplicitArgumentBarrier,
typename Container,
typename = std::enable_if_t<std::is_convertible<
typename CheckedContiguousRange<Container>::element_type (*)[],
element_type (*)[]>::value>>
constexpr CheckedContiguousRange(
CheckedContiguousRange<Container> range) noexcept
: container_(range.container_) {}
constexpr iterator begin() const noexcept {
return iterator(data(), data(), data() + size());
}
constexpr iterator end() const noexcept {
return iterator(data(), data() + size(), data() + size());
}
constexpr const_iterator cbegin() const noexcept { return begin(); }
constexpr const_iterator cend() const noexcept { return end(); }
constexpr reverse_iterator rbegin() const noexcept {
return reverse_iterator(end());
}
constexpr reverse_iterator rend() const noexcept {
return reverse_iterator(begin());
}
constexpr const_reverse_iterator crbegin() const noexcept { return rbegin(); }
constexpr const_reverse_iterator crend() const noexcept { return rend(); }
constexpr reference front() const noexcept { return *begin(); }
constexpr reference back() const noexcept { return *(end() - 1); }
constexpr reference operator[](size_type idx) const noexcept {
return *(begin() + idx);
}
constexpr pointer data() const noexcept {
return container_ ? base::data(*container_) : nullptr;
}
constexpr const_pointer cdata() const noexcept { return data(); }
constexpr size_type size() const noexcept {
return container_ ? base::size(*container_) : 0;
}
constexpr bool empty() const noexcept {
return container_ ? base::empty(*container_) : true;
}
private:
ContiguousContainer* container_ = nullptr;
};
// Utility functions helping to create const ranges and performing automatic
// type deduction.
template <typename ContiguousContainer>
using CheckedContiguousConstRange =
CheckedContiguousRange<const ContiguousContainer>;
template <int&... ExplicitArgumentBarrier, typename ContiguousContainer>
constexpr auto MakeCheckedContiguousRange(
ContiguousContainer&& container) noexcept {
return CheckedContiguousRange<std::remove_reference_t<ContiguousContainer>>(
std::forward<ContiguousContainer>(container));
}
template <int&... ExplicitArgumentBarrier, typename ContiguousContainer>
constexpr auto MakeCheckedContiguousConstRange(
ContiguousContainer&& container) noexcept {
return CheckedContiguousConstRange<
std::remove_reference_t<ContiguousContainer>>(
std::forward<ContiguousContainer>(container));
}
} // namespace base
#endif // BASE_CONTAINERS_CHECKED_RANGE_H_

// Copyright 2017 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef BASE_CONTAINERS_FLAT_MAP_H_
#define BASE_CONTAINERS_FLAT_MAP_H_
#include <functional>
#include <tuple>
#include <utility>
#include "base/containers/flat_tree.h"
#include "base/logging.h"
#include "base/template_util.h"
namespace base {
namespace internal {
// An implementation of the flat_tree GetKeyFromValue template parameter that
// extracts the key as the first element of a pair.
template <class Key, class Mapped>
struct GetKeyFromValuePairFirst {
const Key& operator()(const std::pair<Key, Mapped>& p) const {
return p.first;
}
};
} // namespace internal
// flat_map is a container with a std::map-like interface that stores its
// contents in a sorted vector.
//
// Please see //base/containers/README.md for an overview of which container
// to select.
//
// PROS
//
// - Good memory locality.
// - Low overhead, especially for smaller maps.
// - Performance is good for more workloads than you might expect (see
// overview link above).
// - Supports C++14 map interface.
//
// CONS
//
// - Inserts and removals are O(n).
//
// IMPORTANT NOTES
//
// - Iterators are invalidated across mutations.
// - If possible, construct a flat_map in one operation by inserting into
// a std::vector and moving that vector into the flat_map constructor.
//
// QUICK REFERENCE
//
// Most of the core functionality is inherited from flat_tree. Please see
// flat_tree.h for more details for most of these functions. As a quick
// reference, the functions available are:
//
// Constructors (inputs need not be sorted):
// flat_map(InputIterator first, InputIterator last,
// const Compare& compare = Compare());
// flat_map(const flat_map&);
// flat_map(flat_map&&);
// flat_map(const std::vector<value_type>& items,
// const Compare& compare = Compare());
// flat_map(std::vector<value_type>&& items,
// const Compare& compare = Compare()); // Re-use storage.
// flat_map(std::initializer_list<value_type> ilist,
// const Compare& comp = Compare());
//
// Assignment functions:
// flat_map& operator=(const flat_map&);
// flat_map& operator=(flat_map&&);
// flat_map& operator=(initializer_list<value_type>);
//
// Memory management functions:
// void reserve(size_t);
// size_t capacity() const;
// void shrink_to_fit();
//
// Size management functions:
// void clear();
// size_t size() const;
// size_t max_size() const;
// bool empty() const;
//
// Iterator functions:
// iterator begin();
// const_iterator begin() const;
// const_iterator cbegin() const;
// iterator end();
// const_iterator end() const;
// const_iterator cend() const;
// reverse_iterator rbegin();
//   const_reverse_iterator rbegin() const;
// const_reverse_iterator crbegin() const;
// reverse_iterator rend();
// const_reverse_iterator rend() const;
// const_reverse_iterator crend() const;
//
// Insert and accessor functions:
// mapped_type& operator[](const key_type&);
// mapped_type& operator[](key_type&&);
// mapped_type& at(const K&);
// const mapped_type& at(const K&) const;
// pair<iterator, bool> insert(const value_type&);
// pair<iterator, bool> insert(value_type&&);
// iterator insert(const_iterator hint, const value_type&);
// iterator insert(const_iterator hint, value_type&&);
// void insert(InputIterator first, InputIterator last);
// pair<iterator, bool> insert_or_assign(K&&, M&&);
// iterator insert_or_assign(const_iterator hint, K&&, M&&);
// pair<iterator, bool> emplace(Args&&...);
// iterator emplace_hint(const_iterator, Args&&...);
// pair<iterator, bool> try_emplace(K&&, Args&&...);
// iterator try_emplace(const_iterator hint, K&&, Args&&...);
//
// Underlying type functions:
// underlying_type extract() &&;
// void replace(underlying_type&&);
//
// Erase functions:
// iterator erase(iterator);
// iterator erase(const_iterator);
//   iterator erase(const_iterator first, const_iterator last);
// template <class K> size_t erase(const K& key);
//
// Comparators (see std::map documentation).
// key_compare key_comp() const;
// value_compare value_comp() const;
//
// Search functions:
// template <typename K> size_t count(const K&) const;
// template <typename K> iterator find(const K&);
// template <typename K> const_iterator find(const K&) const;
// template <typename K> bool contains(const K&) const;
// template <typename K> pair<iterator, iterator> equal_range(const K&);
// template <typename K> iterator lower_bound(const K&);
// template <typename K> const_iterator lower_bound(const K&) const;
// template <typename K> iterator upper_bound(const K&);
// template <typename K> const_iterator upper_bound(const K&) const;
//
// General functions:
//   void swap(flat_map&);
//
// Non-member operators:
//   bool operator==(const flat_map&, const flat_map&);
//   bool operator!=(const flat_map&, const flat_map&);
//   bool operator<(const flat_map&, const flat_map&);
//   bool operator>(const flat_map&, const flat_map&);
//   bool operator>=(const flat_map&, const flat_map&);
//   bool operator<=(const flat_map&, const flat_map&);
//
template <class Key, class Mapped, class Compare = std::less<>>
class flat_map : public ::base::internal::flat_tree<
Key,
std::pair<Key, Mapped>,
::base::internal::GetKeyFromValuePairFirst<Key, Mapped>,
Compare> {
private:
using tree = typename ::base::internal::flat_tree<
Key,
std::pair<Key, Mapped>,
::base::internal::GetKeyFromValuePairFirst<Key, Mapped>,
Compare>;
using underlying_type = typename tree::underlying_type;
public:
using key_type = typename tree::key_type;
using mapped_type = Mapped;
using value_type = typename tree::value_type;
using iterator = typename tree::iterator;
using const_iterator = typename tree::const_iterator;
// --------------------------------------------------------------------------
// Lifetime and assignments.
//
// Note: we could do away with these constructors, destructor and assignment
// operator overloads by inheriting |tree|'s, but this breaks the GCC build
// due to https://gcc.gnu.org/bugzilla/show_bug.cgi?id=84782 (see
// https://crbug.com/837221).
flat_map() = default;
explicit flat_map(const Compare& comp);
template <class InputIterator>
flat_map(InputIterator first,
InputIterator last,
const Compare& comp = Compare());
flat_map(const flat_map&) = default;
flat_map(flat_map&&) noexcept = default;
flat_map(const underlying_type& items, const Compare& comp = Compare());
flat_map(underlying_type&& items, const Compare& comp = Compare());
flat_map(std::initializer_list<value_type> ilist,
const Compare& comp = Compare());
~flat_map() = default;
flat_map& operator=(const flat_map&) = default;
flat_map& operator=(flat_map&&) = default;
// Takes the first if there are duplicates in the initializer list.
flat_map& operator=(std::initializer_list<value_type> ilist);
// Out-of-bound calls to at() will CHECK.
template <class K>
mapped_type& at(const K& key);
template <class K>
const mapped_type& at(const K& key) const;
// --------------------------------------------------------------------------
// Map-specific insert operations.
//
// Normal insert() functions are inherited from flat_tree.
//
// Assume that every operation invalidates iterators and references.
// Insertion of one element can take O(size).
mapped_type& operator[](const key_type& key);
mapped_type& operator[](key_type&& key);
template <class K, class M>
std::pair<iterator, bool> insert_or_assign(K&& key, M&& obj);
template <class K, class M>
iterator insert_or_assign(const_iterator hint, K&& key, M&& obj);
template <class K, class... Args>
std::enable_if_t<std::is_constructible<key_type, K&&>::value,
std::pair<iterator, bool>>
try_emplace(K&& key, Args&&... args);
template <class K, class... Args>
std::enable_if_t<std::is_constructible<key_type, K&&>::value, iterator>
try_emplace(const_iterator hint, K&& key, Args&&... args);
// --------------------------------------------------------------------------
// General operations.
//
// Assume that swap invalidates iterators and references.
void swap(flat_map& other) noexcept;
friend void swap(flat_map& lhs, flat_map& rhs) noexcept { lhs.swap(rhs); }
};
// ----------------------------------------------------------------------------
// Lifetime.
template <class Key, class Mapped, class Compare>
flat_map<Key, Mapped, Compare>::flat_map(const Compare& comp) : tree(comp) {}
template <class Key, class Mapped, class Compare>
template <class InputIterator>
flat_map<Key, Mapped, Compare>::flat_map(InputIterator first,
InputIterator last,
const Compare& comp)
: tree(first, last, comp) {}
template <class Key, class Mapped, class Compare>
flat_map<Key, Mapped, Compare>::flat_map(const underlying_type& items,
const Compare& comp)
: tree(items, comp) {}
template <class Key, class Mapped, class Compare>
flat_map<Key, Mapped, Compare>::flat_map(underlying_type&& items,
const Compare& comp)
: tree(std::move(items), comp) {}
template <class Key, class Mapped, class Compare>
flat_map<Key, Mapped, Compare>::flat_map(
std::initializer_list<value_type> ilist,
const Compare& comp)
: flat_map(std::begin(ilist), std::end(ilist), comp) {}
// ----------------------------------------------------------------------------
// Assignments.
template <class Key, class Mapped, class Compare>
auto flat_map<Key, Mapped, Compare>::operator=(
std::initializer_list<value_type> ilist) -> flat_map& {
// When https://gcc.gnu.org/bugzilla/show_bug.cgi?id=84782 gets fixed, we
// need to remember to inherit tree::operator= to prevent
// flat_map<...> x;
// x = {...};
// from first creating a flat_map and then move assigning it. This most
// likely would be optimized away but still affects our debug builds.
tree::operator=(ilist);
return *this;
}
// ----------------------------------------------------------------------------
// Lookups.
template <class Key, class Mapped, class Compare>
template <class K>
auto flat_map<Key, Mapped, Compare>::at(const K& key) -> mapped_type& {
iterator found = tree::find(key);
CHECK(found != tree::end());
return found->second;
}
template <class Key, class Mapped, class Compare>
template <class K>
auto flat_map<Key, Mapped, Compare>::at(const K& key) const
-> const mapped_type& {
const_iterator found = tree::find(key);
CHECK(found != tree::cend());
return found->second;
}
// ----------------------------------------------------------------------------
// Insert operations.
template <class Key, class Mapped, class Compare>
auto flat_map<Key, Mapped, Compare>::operator[](const key_type& key)
-> mapped_type& {
iterator found = tree::lower_bound(key);
if (found == tree::end() || tree::key_comp()(key, found->first))
found = tree::unsafe_emplace(found, key, mapped_type());
return found->second;
}
template <class Key, class Mapped, class Compare>
auto flat_map<Key, Mapped, Compare>::operator[](key_type&& key)
-> mapped_type& {
iterator found = tree::lower_bound(key);
if (found == tree::end() || tree::key_comp()(key, found->first))
found = tree::unsafe_emplace(found, std::move(key), mapped_type());
return found->second;
}
template <class Key, class Mapped, class Compare>
template <class K, class M>
auto flat_map<Key, Mapped, Compare>::insert_or_assign(K&& key, M&& obj)
-> std::pair<iterator, bool> {
auto result =
tree::emplace_key_args(key, std::forward<K>(key), std::forward<M>(obj));
if (!result.second)
result.first->second = std::forward<M>(obj);
return result;
}
template <class Key, class Mapped, class Compare>
template <class K, class M>
auto flat_map<Key, Mapped, Compare>::insert_or_assign(const_iterator hint,
K&& key,
M&& obj) -> iterator {
auto result = tree::emplace_hint_key_args(hint, key, std::forward<K>(key),
std::forward<M>(obj));
if (!result.second)
result.first->second = std::forward<M>(obj);
return result.first;
}
template <class Key, class Mapped, class Compare>
template <class K, class... Args>
auto flat_map<Key, Mapped, Compare>::try_emplace(K&& key, Args&&... args)
-> std::enable_if_t<std::is_constructible<key_type, K&&>::value,
std::pair<iterator, bool>> {
return tree::emplace_key_args(
key, std::piecewise_construct,
std::forward_as_tuple(std::forward<K>(key)),
std::forward_as_tuple(std::forward<Args>(args)...));
}
template <class Key, class Mapped, class Compare>
template <class K, class... Args>
auto flat_map<Key, Mapped, Compare>::try_emplace(const_iterator hint,
K&& key,
Args&&... args)
-> std::enable_if_t<std::is_constructible<key_type, K&&>::value, iterator> {
return tree::emplace_hint_key_args(
hint, key, std::piecewise_construct,
std::forward_as_tuple(std::forward<K>(key)),
std::forward_as_tuple(std::forward<Args>(args)...))
.first;
}
// ----------------------------------------------------------------------------
// General operations.
template <class Key, class Mapped, class Compare>
void flat_map<Key, Mapped, Compare>::swap(flat_map& other) noexcept {
tree::swap(other);
}
} // namespace base
#endif // BASE_CONTAINERS_FLAT_MAP_H_

// Copyright 2017 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef BASE_CONTAINERS_FLAT_SET_H_
#define BASE_CONTAINERS_FLAT_SET_H_
#include <functional>
#include "base/containers/flat_tree.h"
#include "base/template_util.h"
namespace base {
// flat_set is a container with a std::set-like interface that stores its
// contents in a sorted vector.
//
// Please see //base/containers/README.md for an overview of which container
// to select.
//
// PROS
//
// - Good memory locality.
// - Low overhead, especially for smaller sets.
// - Performance is good for more workloads than you might expect (see
// overview link above).
// - Supports C++14 set interface.
//
// CONS
//
// - Inserts and removals are O(n).
//
// IMPORTANT NOTES
//
// - Iterators are invalidated across mutations.
// - If possible, construct a flat_set in one operation by inserting into
// a std::vector and moving that vector into the flat_set constructor.
// - For multiple removals use base::EraseIf() which is O(n) rather than
// O(n * removed_items).
//
// QUICK REFERENCE
//
// Most of the core functionality is inherited from flat_tree. Please see
// flat_tree.h for more details for most of these functions. As a quick
// reference, the functions available are:
//
// Constructors (inputs need not be sorted):
// flat_set(InputIterator first, InputIterator last,
// const Compare& compare = Compare());
// flat_set(const flat_set&);
// flat_set(flat_set&&);
// flat_set(const std::vector<Key>& items,
// const Compare& compare = Compare());
// flat_set(std::vector<Key>&& items,
// const Compare& compare = Compare()); // Re-use storage.
// flat_set(std::initializer_list<value_type> ilist,
// const Compare& comp = Compare());
//
// Assignment functions:
// flat_set& operator=(const flat_set&);
// flat_set& operator=(flat_set&&);
// flat_set& operator=(initializer_list<Key>);
//
// Memory management functions:
// void reserve(size_t);
// size_t capacity() const;
// void shrink_to_fit();
//
// Size management functions:
// void clear();
// size_t size() const;
// size_t max_size() const;
// bool empty() const;
//
// Iterator functions:
// iterator begin();
// const_iterator begin() const;
// const_iterator cbegin() const;
// iterator end();
// const_iterator end() const;
// const_iterator cend() const;
// reverse_iterator rbegin();
//   const_reverse_iterator rbegin() const;
// const_reverse_iterator crbegin() const;
// reverse_iterator rend();
// const_reverse_iterator rend() const;
// const_reverse_iterator crend() const;
//
// Insert and accessor functions:
// pair<iterator, bool> insert(const key_type&);
// pair<iterator, bool> insert(key_type&&);
// void insert(InputIterator first, InputIterator last);
// iterator insert(const_iterator hint, const key_type&);
// iterator insert(const_iterator hint, key_type&&);
// pair<iterator, bool> emplace(Args&&...);
// iterator emplace_hint(const_iterator, Args&&...);
//
// Underlying type functions:
// underlying_type extract() &&;
// void replace(underlying_type&&);
//
// Erase functions:
// iterator erase(iterator);
// iterator erase(const_iterator);
//   iterator erase(const_iterator first, const_iterator last);
// template <typename K> size_t erase(const K& key);
//
// Comparators (see std::set documentation).
// key_compare key_comp() const;
// value_compare value_comp() const;
//
// Search functions:
// template <typename K> size_t count(const K&) const;
// template <typename K> iterator find(const K&);
// template <typename K> const_iterator find(const K&) const;
// template <typename K> bool contains(const K&) const;
//   template <typename K> pair<iterator, iterator> equal_range(const K&);
// template <typename K> iterator lower_bound(const K&);
// template <typename K> const_iterator lower_bound(const K&) const;
// template <typename K> iterator upper_bound(const K&);
// template <typename K> const_iterator upper_bound(const K&) const;
//
// General functions:
//   void swap(flat_set&);
//
// Non-member operators:
//   bool operator==(const flat_set&, const flat_set&);
//   bool operator!=(const flat_set&, const flat_set&);
//   bool operator<(const flat_set&, const flat_set&);
//   bool operator>(const flat_set&, const flat_set&);
//   bool operator>=(const flat_set&, const flat_set&);
//   bool operator<=(const flat_set&, const flat_set&);
//
template <class Key, class Compare = std::less<>>
using flat_set = typename ::base::internal::flat_tree<
Key,
Key,
::base::internal::GetKeyFromValueIdentity<Key>,
Compare>;
} // namespace base
#endif // BASE_CONTAINERS_FLAT_SET_H_

// Copyright 2017 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef BASE_CONTAINERS_FLAT_TREE_H_
#define BASE_CONTAINERS_FLAT_TREE_H_
#include <algorithm>
#include <iterator>
#include <type_traits>
#include <utility>
#include <vector>
#include "base/stl_util.h"
#include "base/template_util.h"
namespace base {
namespace internal {
// This is a convenience method returning true if Iterator is at least a
// ForwardIterator and thus supports multiple passes over a range.
template <class Iterator>
constexpr bool is_multipass() {
return std::is_base_of<
std::forward_iterator_tag,
typename std::iterator_traits<Iterator>::iterator_category>::value;
}
// Uses SFINAE to detect whether type has is_transparent member.
template <typename T, typename = void>
struct IsTransparentCompare : std::false_type {};
template <typename T>
struct IsTransparentCompare<T, void_t<typename T::is_transparent>>
: std::true_type {};
// Implementation -------------------------------------------------------------
// Implementation of a sorted vector for backing flat_set and flat_map. Do not
// use directly.
//
// The use of "value" in this is like std::map uses, meaning it's the thing
// contained (in the case of map it's a <Key, Mapped> pair). The Key is how
// things are looked up. In the case of a set, Key == Value. In the case of
// a map, the Key is a component of a Value.
//
// The helper class GetKeyFromValue provides the means to extract a key from a
// value for comparison purposes. It should implement:
// const Key& operator()(const Value&).
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
class flat_tree {
protected:
using underlying_type = std::vector<Value>;
public:
// --------------------------------------------------------------------------
// Types.
//
using key_type = Key;
using key_compare = KeyCompare;
using value_type = Value;
// Wraps the templated key comparison to compare values.
class value_compare : public key_compare {
public:
value_compare() = default;
template <class Cmp>
explicit value_compare(Cmp&& compare_arg)
: KeyCompare(std::forward<Cmp>(compare_arg)) {}
bool operator()(const value_type& left, const value_type& right) const {
GetKeyFromValue extractor;
return key_compare::operator()(extractor(left), extractor(right));
}
};
using pointer = typename underlying_type::pointer;
using const_pointer = typename underlying_type::const_pointer;
using reference = typename underlying_type::reference;
using const_reference = typename underlying_type::const_reference;
using size_type = typename underlying_type::size_type;
using difference_type = typename underlying_type::difference_type;
using iterator = typename underlying_type::iterator;
using const_iterator = typename underlying_type::const_iterator;
using reverse_iterator = typename underlying_type::reverse_iterator;
using const_reverse_iterator =
typename underlying_type::const_reverse_iterator;
// --------------------------------------------------------------------------
// Lifetime.
//
// Constructors that take range guarantee O(N * log^2(N)) + O(N) complexity
// and take O(N * log(N)) + O(N) if extra memory is available (N is a range
// length).
//
// Assume that move constructors invalidate iterators and references.
//
// The constructors that take ranges, lists, and vectors do not require that
// the input be sorted.
flat_tree();
explicit flat_tree(const key_compare& comp);
template <class InputIterator>
flat_tree(InputIterator first,
InputIterator last,
const key_compare& comp = key_compare());
flat_tree(const flat_tree&);
flat_tree(flat_tree&&) noexcept = default;
flat_tree(const underlying_type& items,
const key_compare& comp = key_compare());
flat_tree(underlying_type&& items, const key_compare& comp = key_compare());
flat_tree(std::initializer_list<value_type> ilist,
const key_compare& comp = key_compare());
~flat_tree();
// --------------------------------------------------------------------------
// Assignments.
//
// Assume that move assignment invalidates iterators and references.
flat_tree& operator=(const flat_tree&);
flat_tree& operator=(flat_tree&&);
// Takes the first if there are duplicates in the initializer list.
flat_tree& operator=(std::initializer_list<value_type> ilist);
// --------------------------------------------------------------------------
// Memory management.
//
// Beware that shrink_to_fit() simply forwards the request to the
// underlying_type and its implementation is free to optimize otherwise and
// leave capacity() to be greater than its size.
//
// reserve() and shrink_to_fit() invalidate iterators and references.
void reserve(size_type new_capacity);
size_type capacity() const;
void shrink_to_fit();
// --------------------------------------------------------------------------
// Size management.
//
// clear() leaves the capacity() of the flat_tree unchanged.
void clear();
size_type size() const;
size_type max_size() const;
bool empty() const;
// --------------------------------------------------------------------------
// Iterators.
iterator begin();
const_iterator begin() const;
const_iterator cbegin() const;
iterator end();
const_iterator end() const;
const_iterator cend() const;
reverse_iterator rbegin();
const_reverse_iterator rbegin() const;
const_reverse_iterator crbegin() const;
reverse_iterator rend();
const_reverse_iterator rend() const;
const_reverse_iterator crend() const;
// --------------------------------------------------------------------------
// Insert operations.
//
// Assume that every operation invalidates iterators and references.
// Insertion of one element can take O(size). Capacity of flat_tree grows in
// an implementation-defined manner.
//
// NOTE: Prefer to build a new flat_tree from a std::vector (or similar)
// instead of calling insert() repeatedly.
std::pair<iterator, bool> insert(const value_type& val);
std::pair<iterator, bool> insert(value_type&& val);
iterator insert(const_iterator position_hint, const value_type& x);
iterator insert(const_iterator position_hint, value_type&& x);
// This method inserts the values from the range [first, last) into the
// current tree.
template <class InputIterator>
void insert(InputIterator first, InputIterator last);
template <class... Args>
std::pair<iterator, bool> emplace(Args&&... args);
template <class... Args>
iterator emplace_hint(const_iterator position_hint, Args&&... args);
// --------------------------------------------------------------------------
// Underlying type operations.
//
// Assume that either operation invalidates iterators and references.
// Extracts the underlying_type and returns it to the caller. Ensures that
// `this` is `empty()` afterwards.
underlying_type extract() &&;
// Replaces the underlying_type with `body`. Expects that `body` is sorted
// and has no repeated elements with regard to value_comp().
void replace(underlying_type&& body);
// --------------------------------------------------------------------------
// Erase operations.
//
// Assume that every operation invalidates iterators and references.
//
// erase(position), erase(first, last) can take O(size).
// erase(key) may take O(size) + O(log(size)).
//
// Prefer base::EraseIf() or some other variation on erase(remove(), end())
// idiom when deleting multiple non-consecutive elements.
iterator erase(iterator position);
iterator erase(const_iterator position);
iterator erase(const_iterator first, const_iterator last);
template <typename K>
size_type erase(const K& key);
// --------------------------------------------------------------------------
// Comparators.
key_compare key_comp() const;
value_compare value_comp() const;
// --------------------------------------------------------------------------
// Search operations.
//
// Search operations have O(log(size)) complexity.
template <typename K>
size_type count(const K& key) const;
template <typename K>
iterator find(const K& key);
template <typename K>
const_iterator find(const K& key) const;
template <typename K>
bool contains(const K& key) const;
template <typename K>
std::pair<iterator, iterator> equal_range(const K& key);
template <typename K>
std::pair<const_iterator, const_iterator> equal_range(const K& key) const;
template <typename K>
iterator lower_bound(const K& key);
template <typename K>
const_iterator lower_bound(const K& key) const;
template <typename K>
iterator upper_bound(const K& key);
template <typename K>
const_iterator upper_bound(const K& key) const;
// --------------------------------------------------------------------------
// General operations.
//
// Assume that swap invalidates iterators and references.
//
// Implementation note: currently we use operator==() and operator<() on
// std::vector, because they have the same contract we need, so we use them
// directly for brevity and in case it is more optimal than calling equal()
// and lexicographical_compare(). If the underlying container type is changed,
// this code may need to be modified.
void swap(flat_tree& other) noexcept;
friend bool operator==(const flat_tree& lhs, const flat_tree& rhs) {
return lhs.impl_.body_ == rhs.impl_.body_;
}
friend bool operator!=(const flat_tree& lhs, const flat_tree& rhs) {
return !(lhs == rhs);
}
friend bool operator<(const flat_tree& lhs, const flat_tree& rhs) {
return lhs.impl_.body_ < rhs.impl_.body_;
}
friend bool operator>(const flat_tree& lhs, const flat_tree& rhs) {
return rhs < lhs;
}
friend bool operator>=(const flat_tree& lhs, const flat_tree& rhs) {
return !(lhs < rhs);
}
friend bool operator<=(const flat_tree& lhs, const flat_tree& rhs) {
return !(lhs > rhs);
}
friend void swap(flat_tree& lhs, flat_tree& rhs) noexcept { lhs.swap(rhs); }
protected:
// Emplaces a new item into the tree that is known not to be in it. This
// is for implementing map operator[].
template <class... Args>
iterator unsafe_emplace(const_iterator position, Args&&... args);
// Attempts to emplace a new element with key |key|. Only if |key| is not yet
// present, construct value_type from |args| and insert it. Returns an
// iterator to the element with key |key| and a bool indicating whether an
// insertion happened.
template <class K, class... Args>
std::pair<iterator, bool> emplace_key_args(const K& key, Args&&... args);
// Similar to |emplace_key_args|, but checks |hint| first as a possible
// insertion position.
template <class K, class... Args>
std::pair<iterator, bool> emplace_hint_key_args(const_iterator hint,
const K& key,
Args&&... args);
private:
// Helper class for e.g. lower_bound that can compare a value on the left
// to a key on the right.
struct KeyValueCompare {
// The key comparison object must outlive this class.
explicit KeyValueCompare(const key_compare& key_comp)
: key_comp_(key_comp) {}
template <typename T, typename U>
bool operator()(const T& lhs, const U& rhs) const {
return key_comp_(extract_if_value_type(lhs), extract_if_value_type(rhs));
}
private:
const key_type& extract_if_value_type(const value_type& v) const {
GetKeyFromValue extractor;
return extractor(v);
}
template <typename K>
const K& extract_if_value_type(const K& k) const {
return k;
}
const key_compare& key_comp_;
};
iterator const_cast_it(const_iterator c_it) {
auto distance = std::distance(cbegin(), c_it);
return std::next(begin(), distance);
}
// This method is inspired by both std::map::insert(P&&) and
// std::map::insert_or_assign(const K&, V&&). It inserts val if an equivalent
// element is not present yet, otherwise it overwrites. It returns an iterator
// to the modified element and a flag indicating whether insertion or
// assignment happened.
template <class V>
std::pair<iterator, bool> insert_or_assign(V&& val) {
auto position = lower_bound(GetKeyFromValue()(val));
if (position == end() || value_comp()(val, *position))
return {impl_.body_.emplace(position, std::forward<V>(val)), true};
*position = std::forward<V>(val);
return {position, false};
}
// This method is similar to insert_or_assign, with the following differences:
// - Instead of searching [begin(), end()) it only searches [first, last).
// - In case no equivalent element is found, val is appended to the end of the
// underlying body and an iterator to the next bigger element in [first,
// last) is returned.
template <class V>
std::pair<iterator, bool> append_or_assign(iterator first,
iterator last,
V&& val) {
auto position = std::lower_bound(first, last, val, value_comp());
if (position == last || value_comp()(val, *position)) {
// emplace_back might invalidate position, which is why distance needs to
// be cached.
const difference_type distance = std::distance(begin(), position);
impl_.body_.emplace_back(std::forward<V>(val));
return {std::next(begin(), distance), true};
}
*position = std::forward<V>(val);
return {position, false};
}
// This method is similar to insert, with the following differences:
// - Instead of searching [begin(), end()) it only searches [first, last).
// - In case no equivalent element is found, val is appended to the end of the
// underlying body and an iterator to the next bigger element in [first,
// last) is returned.
template <class V>
std::pair<iterator, bool> append_unique(iterator first,
iterator last,
V&& val) {
auto position = std::lower_bound(first, last, val, value_comp());
if (position == last || value_comp()(val, *position)) {
// emplace_back might invalidate position, which is why distance needs to
// be cached.
const difference_type distance = std::distance(begin(), position);
impl_.body_.emplace_back(std::forward<V>(val));
return {std::next(begin(), distance), true};
}
return {position, false};
}
void sort_and_unique(iterator first, iterator last) {
// Preserve stability for the unique code below.
std::stable_sort(first, last, value_comp());
auto equal_comp = [this](const value_type& lhs, const value_type& rhs) {
// lhs is already <= rhs due to sort, therefore
// !(lhs < rhs) <=> lhs == rhs.
return !value_comp()(lhs, rhs);
};
erase(std::unique(first, last, equal_comp), last);
}
// To support comparators that may not be possible to default-construct, we
// have to store an instance of Compare. Using this to store all internal
// state of flat_tree and using private inheritance to store compare lets us
// take advantage of an empty base class optimization to avoid extra space in
// the common case when Compare has no state.
struct Impl : private value_compare {
Impl() = default;
template <class Cmp, class... Body>
explicit Impl(Cmp&& compare_arg, Body&&... underlying_type_args)
: value_compare(std::forward<Cmp>(compare_arg)),
body_(std::forward<Body>(underlying_type_args)...) {}
const value_compare& get_value_comp() const { return *this; }
const key_compare& get_key_comp() const { return *this; }
underlying_type body_;
} impl_;
// If the compare is not transparent we want to construct key_type once.
template <typename K>
using KeyTypeOrK = typename std::
conditional<IsTransparentCompare<key_compare>::value, K, key_type>::type;
};
// ----------------------------------------------------------------------------
// Lifetime.
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::flat_tree() = default;
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::flat_tree(
const KeyCompare& comp)
: impl_(comp) {}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
template <class InputIterator>
flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::flat_tree(
InputIterator first,
InputIterator last,
const KeyCompare& comp)
: impl_(comp, first, last) {
sort_and_unique(begin(), end());
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::flat_tree(
const flat_tree&) = default;
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::flat_tree(
const underlying_type& items,
const KeyCompare& comp)
: impl_(comp, items) {
sort_and_unique(begin(), end());
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::flat_tree(
underlying_type&& items,
const KeyCompare& comp)
: impl_(comp, std::move(items)) {
sort_and_unique(begin(), end());
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::flat_tree(
std::initializer_list<value_type> ilist,
const KeyCompare& comp)
: flat_tree(std::begin(ilist), std::end(ilist), comp) {}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::~flat_tree() = default;
// ----------------------------------------------------------------------------
// Assignments.
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::operator=(
const flat_tree&) -> flat_tree& = default;
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::operator=(flat_tree &&)
-> flat_tree& = default;
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::operator=(
std::initializer_list<value_type> ilist) -> flat_tree& {
impl_.body_ = ilist;
sort_and_unique(begin(), end());
return *this;
}
// ----------------------------------------------------------------------------
// Memory management.
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
void flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::reserve(
size_type new_capacity) {
impl_.body_.reserve(new_capacity);
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::capacity() const
-> size_type {
return impl_.body_.capacity();
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
void flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::shrink_to_fit() {
impl_.body_.shrink_to_fit();
}
// ----------------------------------------------------------------------------
// Size management.
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
void flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::clear() {
impl_.body_.clear();
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::size() const
-> size_type {
return impl_.body_.size();
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::max_size() const
-> size_type {
return impl_.body_.max_size();
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
bool flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::empty() const {
return impl_.body_.empty();
}
// ----------------------------------------------------------------------------
// Iterators.
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::begin() -> iterator {
return impl_.body_.begin();
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::begin() const
-> const_iterator {
return impl_.body_.begin();
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::cbegin() const
-> const_iterator {
return impl_.body_.cbegin();
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::end() -> iterator {
return impl_.body_.end();
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::end() const
-> const_iterator {
return impl_.body_.end();
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::cend() const
-> const_iterator {
return impl_.body_.cend();
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::rbegin()
-> reverse_iterator {
return impl_.body_.rbegin();
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::rbegin() const
-> const_reverse_iterator {
return impl_.body_.rbegin();
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::crbegin() const
-> const_reverse_iterator {
return impl_.body_.crbegin();
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::rend()
-> reverse_iterator {
return impl_.body_.rend();
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::rend() const
-> const_reverse_iterator {
return impl_.body_.rend();
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::crend() const
-> const_reverse_iterator {
return impl_.body_.crend();
}
// ----------------------------------------------------------------------------
// Insert operations.
//
// Currently we use position_hint the same way as EASTL or Boost:
// https://github.com/electronicarts/EASTL/blob/master/include/EASTL/vector_set.h#L493
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::insert(
const value_type& val) -> std::pair<iterator, bool> {
return emplace_key_args(GetKeyFromValue()(val), val);
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::insert(
value_type&& val) -> std::pair<iterator, bool> {
return emplace_key_args(GetKeyFromValue()(val), std::move(val));
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::insert(
const_iterator position_hint,
const value_type& val) -> iterator {
return emplace_hint_key_args(position_hint, GetKeyFromValue()(val), val)
.first;
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::insert(
const_iterator position_hint,
value_type&& val) -> iterator {
return emplace_hint_key_args(position_hint, GetKeyFromValue()(val),
std::move(val))
.first;
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
template <class InputIterator>
void flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::insert(
InputIterator first,
InputIterator last) {
if (first == last)
return;
// Dispatch to single element insert if the input range contains a single
// element.
if (is_multipass<InputIterator>() && std::next(first) == last) {
insert(end(), *first);
return;
}
  // Provide a convenience lambda to obtain an iterator pointing past the last
  // old element. This needs to be dynamic due to possible re-allocations.
auto middle = [this, size = size()] { return std::next(begin(), size); };
// For batch updates initialize the first insertion point.
difference_type pos_first_new = size();
// Loop over the input range while appending new values and overwriting
// existing ones, if applicable. Keep track of the first insertion point.
for (; first != last; ++first) {
std::pair<iterator, bool> result = append_unique(begin(), middle(), *first);
if (result.second) {
pos_first_new =
std::min(pos_first_new, std::distance(begin(), result.first));
}
}
// The new elements might be unordered and contain duplicates, so post-process
// the just inserted elements and merge them with the rest, inserting them at
// the previously found spot.
sort_and_unique(middle(), end());
std::inplace_merge(std::next(begin(), pos_first_new), middle(), end(),
value_comp());
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
template <class... Args>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::emplace(Args&&... args)
-> std::pair<iterator, bool> {
return insert(value_type(std::forward<Args>(args)...));
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
template <class... Args>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::emplace_hint(
const_iterator position_hint,
Args&&... args) -> iterator {
return insert(position_hint, value_type(std::forward<Args>(args)...));
}
// ----------------------------------------------------------------------------
// Underlying type operations.
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::
extract() && -> underlying_type {
return std::exchange(impl_.body_, underlying_type());
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
void flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::replace(
underlying_type&& body) {
// Ensure that |body| is sorted and has no repeated elements.
DCHECK(std::is_sorted(body.begin(), body.end(), value_comp()));
DCHECK(std::adjacent_find(body.begin(), body.end(),
[this](const auto& lhs, const auto& rhs) {
return !value_comp()(lhs, rhs);
}) == body.end());
impl_.body_ = std::move(body);
}
// ----------------------------------------------------------------------------
// Erase operations.
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::erase(
iterator position) -> iterator {
CHECK(position != impl_.body_.end());
return impl_.body_.erase(position);
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::erase(
const_iterator position) -> iterator {
CHECK(position != impl_.body_.end());
return impl_.body_.erase(position);
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
template <typename K>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::erase(const K& val)
-> size_type {
auto eq_range = equal_range(val);
auto res = std::distance(eq_range.first, eq_range.second);
erase(eq_range.first, eq_range.second);
return res;
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::erase(
const_iterator first,
const_iterator last) -> iterator {
return impl_.body_.erase(first, last);
}
// ----------------------------------------------------------------------------
// Comparators.
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::key_comp() const
-> key_compare {
return impl_.get_key_comp();
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::value_comp() const
-> value_compare {
return impl_.get_value_comp();
}
// ----------------------------------------------------------------------------
// Search operations.
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
template <typename K>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::count(
const K& key) const -> size_type {
auto eq_range = equal_range(key);
return std::distance(eq_range.first, eq_range.second);
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
template <typename K>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::find(const K& key)
-> iterator {
return const_cast_it(base::as_const(*this).find(key));
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
template <typename K>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::find(
const K& key) const -> const_iterator {
auto eq_range = equal_range(key);
return (eq_range.first == eq_range.second) ? end() : eq_range.first;
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
template <typename K>
bool flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::contains(
const K& key) const {
auto lower = lower_bound(key);
return lower != end() && !key_comp()(key, GetKeyFromValue()(*lower));
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
template <typename K>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::equal_range(
const K& key) -> std::pair<iterator, iterator> {
auto res = base::as_const(*this).equal_range(key);
return {const_cast_it(res.first), const_cast_it(res.second)};
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
template <typename K>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::equal_range(
const K& key) const -> std::pair<const_iterator, const_iterator> {
auto lower = lower_bound(key);
GetKeyFromValue extractor;
if (lower == end() || impl_.get_key_comp()(key, extractor(*lower)))
return {lower, lower};
return {lower, std::next(lower)};
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
template <typename K>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::lower_bound(
const K& key) -> iterator {
return const_cast_it(base::as_const(*this).lower_bound(key));
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
template <typename K>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::lower_bound(
const K& key) const -> const_iterator {
static_assert(std::is_convertible<const KeyTypeOrK<K>&, const K&>::value,
"Requested type cannot be bound to the container's key_type "
"which is required for a non-transparent compare.");
const KeyTypeOrK<K>& key_ref = key;
KeyValueCompare key_value(impl_.get_key_comp());
return std::lower_bound(begin(), end(), key_ref, key_value);
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
template <typename K>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::upper_bound(
const K& key) -> iterator {
return const_cast_it(base::as_const(*this).upper_bound(key));
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
template <typename K>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::upper_bound(
const K& key) const -> const_iterator {
static_assert(std::is_convertible<const KeyTypeOrK<K>&, const K&>::value,
"Requested type cannot be bound to the container's key_type "
"which is required for a non-transparent compare.");
const KeyTypeOrK<K>& key_ref = key;
KeyValueCompare key_value(impl_.get_key_comp());
return std::upper_bound(begin(), end(), key_ref, key_value);
}
// ----------------------------------------------------------------------------
// General operations.
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
void flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::swap(
flat_tree& other) noexcept {
std::swap(impl_, other.impl_);
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
template <class... Args>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::unsafe_emplace(
const_iterator position,
Args&&... args) -> iterator {
return impl_.body_.emplace(position, std::forward<Args>(args)...);
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
template <class K, class... Args>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::emplace_key_args(
const K& key,
Args&&... args) -> std::pair<iterator, bool> {
auto lower = lower_bound(key);
if (lower == end() || key_comp()(key, GetKeyFromValue()(*lower)))
return {unsafe_emplace(lower, std::forward<Args>(args)...), true};
return {lower, false};
}
template <class Key, class Value, class GetKeyFromValue, class KeyCompare>
template <class K, class... Args>
auto flat_tree<Key, Value, GetKeyFromValue, KeyCompare>::emplace_hint_key_args(
const_iterator hint,
const K& key,
Args&&... args) -> std::pair<iterator, bool> {
GetKeyFromValue extractor;
if ((hint == begin() || key_comp()(extractor(*std::prev(hint)), key))) {
if (hint == end() || key_comp()(key, extractor(*hint))) {
// *(hint - 1) < key < *hint => key did not exist and hint is correct.
return {unsafe_emplace(hint, std::forward<Args>(args)...), true};
}
if (!key_comp()(extractor(*hint), key)) {
// key == *hint => no-op, return correct hint.
return {const_cast_it(hint), false};
}
}
// hint was not helpful, dispatch to hintless version.
return emplace_key_args(key, std::forward<Args>(args)...);
}
// For containers like sets, the key is the same as the value. This implements
// the GetKeyFromValue template parameter to flat_tree for this case.
template <class Key>
struct GetKeyFromValueIdentity {
const Key& operator()(const Key& k) const { return k; }
};
} // namespace internal
// ----------------------------------------------------------------------------
// Free functions.
// Erases all elements that match predicate. It has O(size) complexity.
template <class Key,
class Value,
class GetKeyFromValue,
class KeyCompare,
typename Predicate>
size_t EraseIf(
base::internal::flat_tree<Key, Value, GetKeyFromValue, KeyCompare>&
container,
Predicate pred) {
auto it = std::remove_if(container.begin(), container.end(), pred);
size_t removed = std::distance(it, container.end());
container.erase(it, container.end());
return removed;
}
} // namespace base
#endif // BASE_CONTAINERS_FLAT_TREE_H_

// Copyright (c) 2011 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef BASE_CONTAINERS_ID_MAP_H_
#define BASE_CONTAINERS_ID_MAP_H_
#include <stddef.h>
#include <stdint.h>
#include <memory>
#include <set>
#include <type_traits>
#include <unordered_map>
#include <utility>
#include "base/containers/flat_set.h"
#include "base/logging.h"
#include "base/macros.h"
#include "base/sequence_checker.h"
namespace base {
// This object maintains a list of IDs that can be quickly converted to
// pointers to objects. It is implemented as a hash table, optimized for
// relatively small data sets (in the common case, there will be exactly one
// item in the list).
//
// Items can be inserted into the container with an arbitrary ID, but the
// caller must ensure that the IDs are unique. Mixing manually specified IDs
// with automatically generated ones is not allowed, because they can collide.
// The map's value type (the V param) can be any dereferenceable type, such as
// a raw pointer or smart pointer.
template <typename V, typename K = int32_t>
class IDMap final {
public:
using KeyType = K;
private:
using T = typename std::remove_reference<decltype(*V())>::type;
using HashTable = std::unordered_map<KeyType, V>;
public:
IDMap() : iteration_depth_(0), next_id_(1), check_on_null_data_(false) {
// A number of consumers of IDMap create it on one thread but always
// access it from a different, but consistent, thread (or sequence)
// post-construction. The first call to CalledOnValidSequence() will re-bind
// it.
DETACH_FROM_SEQUENCE(sequence_checker_);
}
~IDMap() {
    // Many IDMaps are static, and hence will be destroyed on the main
// thread. However, all the accesses may take place on another thread (or
// sequence), such as the IO thread. Detaching again to clean this up.
DETACH_FROM_SEQUENCE(sequence_checker_);
}
// Sets whether Add and Replace should DCHECK if passed in NULL data.
// Default is false.
void set_check_on_null_data(bool value) { check_on_null_data_ = value; }
  // Adds a value with an automatically generated unique ID. See AddWithID.
KeyType Add(V data) { return AddInternal(std::move(data)); }
// Adds a new data member with the specified ID. The ID must not be in
// the list. The caller either must generate all unique IDs itself and use
// this function, or allow this object to generate IDs and call Add. These
// two methods may not be mixed, or duplicate IDs may be generated.
void AddWithID(V data, KeyType id) { AddWithIDInternal(std::move(data), id); }
void Remove(KeyType id) {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
typename HashTable::iterator i = data_.find(id);
if (i == data_.end() || IsRemoved(id)) {
NOTREACHED() << "Attempting to remove an item not in the list";
return;
}
if (iteration_depth_ == 0) {
data_.erase(i);
} else {
removed_ids_.insert(id);
}
}
// Replaces the value for |id| with |new_data| and returns the existing value.
// Should only be called with an already added id.
V Replace(KeyType id, V new_data) {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
DCHECK(!check_on_null_data_ || new_data);
typename HashTable::iterator i = data_.find(id);
DCHECK(i != data_.end());
DCHECK(!IsRemoved(id));
using std::swap;
swap(i->second, new_data);
return new_data;
}
void Clear() {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
if (iteration_depth_ == 0) {
data_.clear();
} else {
removed_ids_.reserve(data_.size());
removed_ids_.insert(KeyIterator(data_.begin()), KeyIterator(data_.end()));
}
}
bool IsEmpty() const {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
return size() == 0u;
}
T* Lookup(KeyType id) const {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
typename HashTable::const_iterator i = data_.find(id);
if (i == data_.end() || !i->second || IsRemoved(id))
return nullptr;
return &*i->second;
}
size_t size() const {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
return data_.size() - removed_ids_.size();
}
#if defined(UNIT_TEST)
int iteration_depth() const {
return iteration_depth_;
}
#endif // defined(UNIT_TEST)
// It is safe to remove elements from the map during iteration. All iterators
// will remain valid.
template<class ReturnType>
class Iterator {
public:
Iterator(IDMap<V, K>* map) : map_(map), iter_(map_->data_.begin()) {
Init();
}
Iterator(const Iterator& iter)
: map_(iter.map_),
iter_(iter.iter_) {
Init();
}
    const Iterator& operator=(const Iterator& iter) {
      map_ = iter.map_;
      iter_ = iter.iter_;
Init();
return *this;
}
~Iterator() {
DCHECK_CALLED_ON_VALID_SEQUENCE(map_->sequence_checker_);
// We're going to decrement iteration depth. Make sure it's greater than
// zero so that it doesn't become negative.
DCHECK_LT(0, map_->iteration_depth_);
if (--map_->iteration_depth_ == 0)
map_->Compact();
}
bool IsAtEnd() const {
DCHECK_CALLED_ON_VALID_SEQUENCE(map_->sequence_checker_);
return iter_ == map_->data_.end();
}
KeyType GetCurrentKey() const {
DCHECK_CALLED_ON_VALID_SEQUENCE(map_->sequence_checker_);
return iter_->first;
}
ReturnType* GetCurrentValue() const {
DCHECK_CALLED_ON_VALID_SEQUENCE(map_->sequence_checker_);
if (!iter_->second || map_->IsRemoved(iter_->first))
return nullptr;
return &*iter_->second;
}
void Advance() {
DCHECK_CALLED_ON_VALID_SEQUENCE(map_->sequence_checker_);
++iter_;
SkipRemovedEntries();
}
private:
void Init() {
DCHECK_CALLED_ON_VALID_SEQUENCE(map_->sequence_checker_);
++map_->iteration_depth_;
SkipRemovedEntries();
}
void SkipRemovedEntries() {
while (iter_ != map_->data_.end() && map_->IsRemoved(iter_->first))
++iter_;
}
IDMap<V, K>* map_;
typename HashTable::const_iterator iter_;
};
typedef Iterator<T> iterator;
typedef Iterator<const T> const_iterator;
private:
// Transforms a map iterator to an iterator on the keys of the map.
// Used by Clear() to populate |removed_ids_| in bulk.
struct KeyIterator : std::iterator<std::forward_iterator_tag, KeyType> {
using inner_iterator = typename HashTable::iterator;
inner_iterator iter_;
KeyIterator(inner_iterator iter) : iter_(iter) {}
KeyType operator*() const { return iter_->first; }
KeyIterator& operator++() {
++iter_;
return *this;
}
KeyIterator operator++(int) { return KeyIterator(iter_++); }
bool operator==(const KeyIterator& other) const {
return iter_ == other.iter_;
}
bool operator!=(const KeyIterator& other) const {
return iter_ != other.iter_;
}
};
KeyType AddInternal(V data) {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
DCHECK(!check_on_null_data_ || data);
KeyType this_id = next_id_;
DCHECK(data_.find(this_id) == data_.end()) << "Inserting duplicate item";
data_[this_id] = std::move(data);
next_id_++;
return this_id;
}
void AddWithIDInternal(V data, KeyType id) {
DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
DCHECK(!check_on_null_data_ || data);
if (IsRemoved(id)) {
removed_ids_.erase(id);
} else {
DCHECK(data_.find(id) == data_.end()) << "Inserting duplicate item";
}
data_[id] = std::move(data);
}
bool IsRemoved(KeyType key) const {
return removed_ids_.find(key) != removed_ids_.end();
}
void Compact() {
DCHECK_EQ(0, iteration_depth_);
for (const auto& i : removed_ids_)
data_.erase(i);
removed_ids_.clear();
}
// Keep track of how many iterators are currently iterating on us to safely
// handle removing items during iteration.
int iteration_depth_;
// Keep set of IDs that should be removed after the outermost iteration has
// finished. This way we manage to not invalidate the iterator when an element
// is removed.
base::flat_set<KeyType> removed_ids_;
// The next ID that we will return from Add()
KeyType next_id_;
HashTable data_;
// See description above setter.
bool check_on_null_data_;
SEQUENCE_CHECKER(sequence_checker_);
DISALLOW_COPY_AND_ASSIGN(IDMap);
};
} // namespace base
#endif // BASE_CONTAINERS_ID_MAP_H_

// Copyright 2018 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include "base/containers/intrusive_heap.h"
#include "base/logging.h"
#include "base/memory/ptr_util.h"
namespace base {
////////////////////////////////////////////////////////////////////////////////
// HeapHandle
// static
HeapHandle HeapHandle::Invalid() {
return HeapHandle();
}
////////////////////////////////////////////////////////////////////////////////
// InternalHeapHandleStorage
InternalHeapHandleStorage::InternalHeapHandleStorage()
: handle_(new HeapHandle()) {}
InternalHeapHandleStorage::InternalHeapHandleStorage(
InternalHeapHandleStorage&& other) noexcept
: handle_(std::move(other.handle_)) {
DCHECK(intrusive_heap::IsInvalid(other.handle_));
}
InternalHeapHandleStorage::~InternalHeapHandleStorage() = default;
InternalHeapHandleStorage& InternalHeapHandleStorage::operator=(
InternalHeapHandleStorage&& other) noexcept {
handle_ = std::move(other.handle_);
DCHECK(intrusive_heap::IsInvalid(other.handle_));
return *this;
}
void InternalHeapHandleStorage::swap(
InternalHeapHandleStorage& other) noexcept {
std::swap(handle_, other.handle_);
}
} // namespace base

// Copyright (c) 2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef BASE_CONTAINERS_LINKED_LIST_H_
#define BASE_CONTAINERS_LINKED_LIST_H_
#include "base/macros.h"
// Simple LinkedList type. (See the Q&A section to understand how this
// differs from std::list).
//
// To use, start by declaring the class which will be contained in the linked
// list, as extending LinkNode (this gives it next/previous pointers).
//
// class MyNodeType : public LinkNode<MyNodeType> {
// ...
// };
//
// Next, to keep track of the list's head/tail, use a LinkedList instance:
//
// LinkedList<MyNodeType> list;
//
// To add elements to the list, use any of LinkedList::Append,
// LinkNode::InsertBefore, or LinkNode::InsertAfter:
//
// LinkNode<MyNodeType>* n1 = ...;
// LinkNode<MyNodeType>* n2 = ...;
// LinkNode<MyNodeType>* n3 = ...;
//
// list.Append(n1);
// list.Append(n3);
//   n2->InsertBefore(n3);
//
// Lastly, to iterate through the linked list forwards:
//
// for (LinkNode<MyNodeType>* node = list.head();
// node != list.end();
// node = node->next()) {
// MyNodeType* value = node->value();
// ...
// }
//
// Or to iterate the linked list backwards:
//
// for (LinkNode<MyNodeType>* node = list.tail();
// node != list.end();
// node = node->previous()) {
// MyNodeType* value = node->value();
// ...
// }
//
// Questions and Answers:
//
// Q. Should I use std::list or base::LinkedList?
//
// A. The main reason to use base::LinkedList over std::list is
// performance. If you don't care about the performance differences
// then use an STL container, as it makes for better code readability.
//
// Comparing the performance of base::LinkedList<T> to std::list<T*>:
//
// * Erasing an element of type T* from base::LinkedList<T> is
// an O(1) operation. Whereas for std::list<T*> it is O(n).
// That is because with std::list<T*> you must obtain an
// iterator to the T* element before you can call erase(iterator).
//
// * Insertion operations with base::LinkedList<T> never require
// heap allocations.
//
// Q. How does base::LinkedList implementation differ from std::list?
//
// A. Doubly-linked lists are made up of nodes that contain "next" and
// "previous" pointers that reference other nodes in the list.
//
// With base::LinkedList<T>, the type being inserted already reserves
// space for the "next" and "previous" pointers (base::LinkNode<T>*).
// Whereas with std::list<T> the type can be anything, so the implementation
// needs to glue on the "next" and "previous" pointers using
// some internal node type.
namespace base {
template <typename T>
class LinkNode {
public:
LinkNode() : previous_(nullptr), next_(nullptr) {}
LinkNode(LinkNode<T>* previous, LinkNode<T>* next)
: previous_(previous), next_(next) {}
LinkNode(LinkNode<T>&& rhs) {
next_ = rhs.next_;
rhs.next_ = nullptr;
previous_ = rhs.previous_;
rhs.previous_ = nullptr;
// If the node belongs to a list, next_ and previous_ are both non-null.
// Otherwise, they are both null.
if (next_) {
next_->previous_ = this;
previous_->next_ = this;
}
}
// Insert |this| into the linked list, before |e|.
void InsertBefore(LinkNode<T>* e) {
this->next_ = e;
this->previous_ = e->previous_;
e->previous_->next_ = this;
e->previous_ = this;
}
// Insert |this| into the linked list, after |e|.
void InsertAfter(LinkNode<T>* e) {
this->next_ = e->next_;
this->previous_ = e;
e->next_->previous_ = this;
e->next_ = this;
}
// Remove |this| from the linked list.
void RemoveFromList() {
this->previous_->next_ = this->next_;
this->next_->previous_ = this->previous_;
// next() and previous() return non-null if and only if this node is in a
// list.
this->next_ = nullptr;
this->previous_ = nullptr;
}
LinkNode<T>* previous() const {
return previous_;
}
LinkNode<T>* next() const {
return next_;
}
// Cast from the node-type to the value type.
const T* value() const {
return static_cast<const T*>(this);
}
T* value() {
return static_cast<T*>(this);
}
private:
LinkNode<T>* previous_;
LinkNode<T>* next_;
DISALLOW_COPY_AND_ASSIGN(LinkNode);
};
template <typename T>
class LinkedList {
public:
// The "root" node is self-referential, and forms the basis of a circular
// list (root_.next() will point back to the start of the list,
// and root_->previous() wraps around to the end of the list).
LinkedList() : root_(&root_, &root_) {}
// Appends |e| to the end of the linked list.
void Append(LinkNode<T>* e) {
e->InsertBefore(&root_);
}
LinkNode<T>* head() const {
return root_.next();
}
LinkNode<T>* tail() const {
return root_.previous();
}
const LinkNode<T>* end() const {
return &root_;
}
bool empty() const { return head() == end(); }
private:
LinkNode<T> root_;
DISALLOW_COPY_AND_ASSIGN(LinkedList);
};
} // namespace base
#endif // BASE_CONTAINERS_LINKED_LIST_H_


@@ -0,0 +1,268 @@
// Copyright (c) 2011 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// This file contains a template for a Most Recently Used cache that allows
// constant-time access to items using a key, but easy identification of the
// least-recently-used items for removal. Each key can only be associated with
// one payload item at a time.
//
// The key object will be stored twice, so it should support efficient copying.
//
// NOTE: While all operations are O(1), this code is written for
// legibility rather than optimality. If future profiling identifies this as
// a bottleneck, there is room for smaller values of 1 in the O(1). :]
#ifndef BASE_CONTAINERS_MRU_CACHE_H_
#define BASE_CONTAINERS_MRU_CACHE_H_
#include <stddef.h>
#include <algorithm>
#include <functional>
#include <list>
#include <map>
#include <unordered_map>
#include <utility>
#include "base/logging.h"
#include "base/macros.h"
namespace base {
namespace trace_event {
namespace internal {
template <class MruCacheType>
size_t DoEstimateMemoryUsageForMruCache(const MruCacheType&);
} // namespace internal
} // namespace trace_event
// MRUCacheBase ----------------------------------------------------------------
// This template is used to standardize map type containers that can be used
// by MRUCacheBase. This level of indirection is necessary because of the way
// that template template params and default template params interact.
template <class KeyType, class ValueType, class CompareType>
struct MRUCacheStandardMap {
typedef std::map<KeyType, ValueType, CompareType> Type;
};
// Base class for the MRU cache specializations defined below.
template <class KeyType,
class PayloadType,
class HashOrCompareType,
template <typename, typename, typename> class MapType =
MRUCacheStandardMap>
class MRUCacheBase {
public:
// The payload of the list. This maintains a copy of the key so we can
// efficiently delete things given an element of the list.
typedef std::pair<KeyType, PayloadType> value_type;
private:
typedef std::list<value_type> PayloadList;
typedef typename MapType<KeyType,
typename PayloadList::iterator,
HashOrCompareType>::Type KeyIndex;
public:
typedef typename PayloadList::size_type size_type;
typedef typename PayloadList::iterator iterator;
typedef typename PayloadList::const_iterator const_iterator;
typedef typename PayloadList::reverse_iterator reverse_iterator;
typedef typename PayloadList::const_reverse_iterator const_reverse_iterator;
enum { NO_AUTO_EVICT = 0 };
// The max_size is the size to which the cache will prune its members when a
// new item is inserted. If the caller wants to manage this itself (for
// example, maybe it has special work to do when something is evicted), it
// can pass NO_AUTO_EVICT to not restrict the cache size.
explicit MRUCacheBase(size_type max_size) : max_size_(max_size) {}
virtual ~MRUCacheBase() = default;
size_type max_size() const { return max_size_; }
// Inserts a payload item with the given key. If an existing item has
// the same key, it is removed prior to insertion. An iterator indicating the
// inserted item will be returned (this will always be the front of the list).
//
// The payload will be forwarded.
template <typename Payload>
iterator Put(const KeyType& key, Payload&& payload) {
// Remove any existing payload with that key.
typename KeyIndex::iterator index_iter = index_.find(key);
if (index_iter != index_.end()) {
// Erase the reference to it. The index reference will be replaced in the
// code below.
Erase(index_iter->second);
} else if (max_size_ != NO_AUTO_EVICT) {
// New item is being inserted which might make it larger than the maximum
// size: kick the oldest thing out if necessary.
ShrinkToSize(max_size_ - 1);
}
ordering_.emplace_front(key, std::forward<Payload>(payload));
index_.emplace(key, ordering_.begin());
return ordering_.begin();
}
// Retrieves the contents of the given key, or end() if not found. This method
// has the side effect of moving the requested item to the front of the
// recency list.
iterator Get(const KeyType& key) {
typename KeyIndex::iterator index_iter = index_.find(key);
if (index_iter == index_.end())
return end();
typename PayloadList::iterator iter = index_iter->second;
// Move the touched item to the front of the recency ordering.
ordering_.splice(ordering_.begin(), ordering_, iter);
return ordering_.begin();
}
// Retrieves the item associated with a given key and returns an iterator to
// it, without affecting the recency ordering (unlike Get). Returns end() if
// the key is not found.
iterator Peek(const KeyType& key) {
typename KeyIndex::const_iterator index_iter = index_.find(key);
if (index_iter == index_.end())
return end();
return index_iter->second;
}
const_iterator Peek(const KeyType& key) const {
typename KeyIndex::const_iterator index_iter = index_.find(key);
if (index_iter == index_.end())
return end();
return index_iter->second;
}
// Exchanges the contents of |this| by the contents of the |other|.
void Swap(MRUCacheBase& other) {
ordering_.swap(other.ordering_);
index_.swap(other.index_);
std::swap(max_size_, other.max_size_);
}
// Erases the item referenced by the given iterator. An iterator to the item
// following it will be returned. The iterator must be valid.
iterator Erase(iterator pos) {
index_.erase(pos->first);
return ordering_.erase(pos);
}
// MRUCache entries are often processed in reverse order, so we add this
// convenience function (not typically defined by STL containers).
reverse_iterator Erase(reverse_iterator pos) {
// We have to actually give it the incremented iterator to delete, since
// the forward iterator that base() returns is actually one past the item
// being iterated over.
return reverse_iterator(Erase((++pos).base()));
}
// Shrinks the cache so it only holds |new_size| items. If |new_size| is
// bigger or equal to the current number of items, this will do nothing.
void ShrinkToSize(size_type new_size) {
for (size_type i = size(); i > new_size; i--)
Erase(rbegin());
}
// Deletes everything from the cache.
void Clear() {
index_.clear();
ordering_.clear();
}
// Returns the number of elements in the cache.
size_type size() const {
// We don't use ordering_.size() for the return value because
// (as a linked list) it can be O(n).
DCHECK(index_.size() == ordering_.size());
return index_.size();
}
// Allows iteration over the list. Forward iteration starts with the most
// recent item and works backwards.
//
// Note that since these iterators are actually iterators over a list, you
// can keep them as you insert or delete things (as long as you don't delete
// the one you are pointing to) and they will still be valid.
iterator begin() { return ordering_.begin(); }
const_iterator begin() const { return ordering_.begin(); }
iterator end() { return ordering_.end(); }
const_iterator end() const { return ordering_.end(); }
reverse_iterator rbegin() { return ordering_.rbegin(); }
const_reverse_iterator rbegin() const { return ordering_.rbegin(); }
reverse_iterator rend() { return ordering_.rend(); }
const_reverse_iterator rend() const { return ordering_.rend(); }
bool empty() const { return ordering_.empty(); }
private:
template <class MruCacheType>
friend size_t trace_event::internal::DoEstimateMemoryUsageForMruCache(
const MruCacheType&);
PayloadList ordering_;
KeyIndex index_;
size_type max_size_;
DISALLOW_COPY_AND_ASSIGN(MRUCacheBase);
};
// MRUCache --------------------------------------------------------------------
// A container that does not do anything to free its data. Use this when storing
// value types (as opposed to pointers) in the list.
template <class KeyType,
class PayloadType,
class CompareType = std::less<KeyType>>
class MRUCache : public MRUCacheBase<KeyType, PayloadType, CompareType> {
private:
using ParentType = MRUCacheBase<KeyType, PayloadType, CompareType>;
public:
// See MRUCacheBase, noting the possibility of using NO_AUTO_EVICT.
explicit MRUCache(typename ParentType::size_type max_size)
: ParentType(max_size) {}
virtual ~MRUCache() = default;
private:
DISALLOW_COPY_AND_ASSIGN(MRUCache);
};
// HashingMRUCache ------------------------------------------------------------
template <class KeyType, class ValueType, class HashType>
struct MRUCacheHashMap {
typedef std::unordered_map<KeyType, ValueType, HashType> Type;
};
// This class is similar to MRUCache, except that it uses std::unordered_map as
// the map type instead of std::map. Note that your KeyType must be hashable to
// use this cache or you need to provide a hashing class.
template <class KeyType, class PayloadType, class HashType = std::hash<KeyType>>
class HashingMRUCache
: public MRUCacheBase<KeyType, PayloadType, HashType, MRUCacheHashMap> {
private:
using ParentType =
MRUCacheBase<KeyType, PayloadType, HashType, MRUCacheHashMap>;
public:
// See MRUCacheBase, noting the possibility of using NO_AUTO_EVICT.
explicit HashingMRUCache(typename ParentType::size_type max_size)
: ParentType(max_size) {}
virtual ~HashingMRUCache() = default;
private:
DISALLOW_COPY_AND_ASSIGN(HashingMRUCache);
};
} // namespace base
#endif // BASE_CONTAINERS_MRU_CACHE_H_


@@ -0,0 +1,23 @@
// Copyright 2017 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef BASE_CONTAINERS_QUEUE_H_
#define BASE_CONTAINERS_QUEUE_H_
#include <queue>
#include "base/containers/circular_deque.h"
namespace base {
// Provides a definition of base::queue that's like std::queue but uses a
// base::circular_deque instead of std::deque. Since std::queue is just a
// wrapper for an underlying type, we can just provide a typedef for it that
// defaults to the base circular_deque.
template <class T, class Container = circular_deque<T>>
using queue = std::queue<T, Container>;
} // namespace base
#endif // BASE_CONTAINERS_QUEUE_H_


@@ -0,0 +1,133 @@
// Copyright 2013 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef BASE_CONTAINERS_RING_BUFFER_H_
#define BASE_CONTAINERS_RING_BUFFER_H_
#include <stddef.h>
#include "base/logging.h"
#include "base/macros.h"
namespace base {
// base::RingBuffer uses a fixed-size array, unlike base::circular_deque and
// std::deque, and so, one can access only the last |kSize| elements. Also, you
// can add elements to the front and read/modify random elements, but cannot
// remove elements from the back. Therefore, it does not have a |Size| method,
// only |BufferSize|, which is a constant, and |CurrentIndex|, which is the
// number of elements added so far.
//
// If the above is sufficient for your use case, base::RingBuffer should be more
// efficient than base::circular_deque.
template <typename T, size_t kSize>
class RingBuffer {
public:
RingBuffer() : current_index_(0) {}
size_t BufferSize() const { return kSize; }
size_t CurrentIndex() const { return current_index_; }
// Returns true if a value was saved to index |n|.
bool IsFilledIndex(size_t n) const {
return IsFilledIndexByBufferIndex(BufferIndex(n));
}
// Returns the element at index |n| (% |kSize|).
//
// n = 0 returns the oldest value and
// n = BufferSize() - 1 returns the most recent value.
const T& ReadBuffer(size_t n) const {
const size_t buffer_index = BufferIndex(n);
CHECK(IsFilledIndexByBufferIndex(buffer_index));
return buffer_[buffer_index];
}
T* MutableReadBuffer(size_t n) {
const size_t buffer_index = BufferIndex(n);
CHECK(IsFilledIndexByBufferIndex(buffer_index));
return &buffer_[buffer_index];
}
void SaveToBuffer(const T& value) {
buffer_[BufferIndex(0)] = value;
current_index_++;
}
void Clear() { current_index_ = 0; }
// Iterator has const access to the RingBuffer it got retrieved from.
class Iterator {
public:
size_t index() const { return index_; }
const T* operator->() const { return &buffer_.ReadBuffer(index_); }
const T* operator*() const { return &buffer_.ReadBuffer(index_); }
Iterator& operator++() {
index_++;
if (index_ == kSize)
out_of_range_ = true;
return *this;
}
Iterator& operator--() {
if (index_ == 0)
out_of_range_ = true;
index_--;
return *this;
}
operator bool() const {
return !out_of_range_ && buffer_.IsFilledIndex(index_);
}
private:
Iterator(const RingBuffer<T, kSize>& buffer, size_t index)
: buffer_(buffer), index_(index), out_of_range_(false) {}
const RingBuffer<T, kSize>& buffer_;
size_t index_;
bool out_of_range_;
friend class RingBuffer<T, kSize>;
};
// Returns an Iterator pointing to the oldest value in the buffer.
// Example usage (iterate from oldest to newest value):
// for (RingBuffer<T, kSize>::Iterator it = ring_buffer.Begin(); it; ++it) {}
Iterator Begin() const {
if (current_index_ < kSize)
return Iterator(*this, kSize - current_index_);
return Iterator(*this, 0);
}
// Returns an Iterator pointing to the newest value in the buffer.
// Example usage (iterate backwards from newest to oldest value):
// for (RingBuffer<T, kSize>::Iterator it = ring_buffer.End(); it; --it) {}
Iterator End() const { return Iterator(*this, kSize - 1); }
private:
inline size_t BufferIndex(size_t n) const {
return (current_index_ + n) % kSize;
}
// This specialization of |IsFilledIndex| is a micro-optimization that enables
// us to do e.g. `CHECK(IsFilledIndex(n))` without calling |BufferIndex|
// twice. Since |BufferIndex| involves a % operation, it's not quite free at a
// micro-scale.
inline bool IsFilledIndexByBufferIndex(size_t buffer_index) const {
return buffer_index < current_index_;
}
T buffer_[kSize];
size_t current_index_;
DISALLOW_COPY_AND_ASSIGN(RingBuffer);
};
} // namespace base
#endif // BASE_CONTAINERS_RING_BUFFER_H_


@@ -0,0 +1,627 @@
// Copyright (c) 2012 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef BASE_CONTAINERS_SMALL_MAP_H_
#define BASE_CONTAINERS_SMALL_MAP_H_
#include <stddef.h>
#include <limits>
#include <map>
#include <new>
#include <string>
#include <unordered_map>
#include <utility>
#include "base/logging.h"
namespace {
constexpr size_t kUsingFullMapSentinel = std::numeric_limits<size_t>::max();
} // namespace
namespace base {
// small_map is a container with a std::map-like interface. It starts out backed
// by an unsorted array but switches to some other container type if it grows
// beyond this fixed size.
//
// Please see //base/containers/README.md for an overview of which container
// to select.
//
// PROS
//
// - Good memory locality and low overhead for smaller maps.
// - Handles large maps without the degenerate performance of flat_map.
//
// CONS
//
// - Larger code size than the alternatives.
//
// IMPORTANT NOTES
//
// - Iterators are invalidated across mutations.
//
// DETAILS
//
// base::small_map will pick up the comparator from the underlying map type. In
// std::map only a "less" operator is defined, which requires us to do two
// comparisons per element when doing the brute-force search in the simple
// array. std::unordered_map has a key_equal function which will be used.
//
// We define default overrides for the common map types to avoid this
// double-compare, but you should be aware of this if you use your own operator<
// for your map and supply your own version of == to the small_map. You can
// use regular operator== by just doing:
//
//   base::small_map<std::map<MyKey, MyValue>, 4, std::equal_to<MyKey>>
//
//
// USAGE
// -----
//
// NormalMap: The map type to fall back to. This also defines the key and value
// types for the small_map.
// kArraySize: The size of the initial array of results. This will be allocated
// with the small_map object rather than separately on the heap.
// Once the map grows beyond this size, the map type will be used
// instead.
// EqualKey: A functor which tests two keys for equality. If the wrapped map
// type has a "key_equal" member (unordered_map does), then that will
// be used by default. If the wrapped map type has a strict weak
// ordering "key_compare" (std::map does), that will be used to
// implement equality by default.
// MapInit: A functor that takes a NormalMap* and uses it to initialize the map.
// This functor will be called at most once per small_map, when the map
// exceeds the threshold of kArraySize and we are about to copy values
// from the array to the map. The functor *must* initialize the
// NormalMap* argument with placement new, since after it runs we
// assume that the NormalMap has been initialized.
//
// Example:
// base::small_map<std::map<string, int>> days;
// days["sunday" ] = 0;
// days["monday" ] = 1;
// days["tuesday" ] = 2;
// days["wednesday"] = 3;
// days["thursday" ] = 4;
// days["friday" ] = 5;
// days["saturday" ] = 6;
namespace internal {
template <typename NormalMap>
class small_map_default_init {
public:
void operator()(NormalMap* map) const { new (map) NormalMap(); }
};
// has_key_equal<M>::value is true iff there exists a type M::key_equal. This is
// used to dispatch to one of the select_equal_key<> metafunctions below.
template <typename M>
struct has_key_equal {
typedef char sml; // "small" is sometimes #defined so we use an abbreviation.
typedef struct { char dummy[2]; } big;
// Two functions, one accepts types that have a key_equal member, and one that
// accepts anything. They each return a value of a different size, so we can
// determine at compile-time which function would have been called.
template <typename U> static big test(typename U::key_equal*);
template <typename> static sml test(...);
// Determines if M::key_equal exists by looking at the size of the return
// type of the compiler-chosen test() function.
static const bool value = (sizeof(test<M>(0)) == sizeof(big));
};
template <typename M> const bool has_key_equal<M>::value;
// Base template used for map types that do NOT have an M::key_equal member,
// e.g., std::map<>. These maps have a strict weak ordering comparator rather
// than an equality functor, so equality will be implemented in terms of that
// comparator.
//
// There's a partial specialization of this template below for map types that do
// have an M::key_equal member.
template <typename M, bool has_key_equal_value>
struct select_equal_key {
struct equal_key {
bool operator()(const typename M::key_type& left,
const typename M::key_type& right) {
// Implements equality in terms of a strict weak ordering comparator.
typename M::key_compare comp;
return !comp(left, right) && !comp(right, left);
}
};
};
// Partial template specialization handles case where M::key_equal exists, e.g.,
// unordered_map<>.
template <typename M>
struct select_equal_key<M, true> {
typedef typename M::key_equal equal_key;
};
} // namespace internal
template <typename NormalMap,
size_t kArraySize = 4,
typename EqualKey = typename internal::select_equal_key<
NormalMap,
internal::has_key_equal<NormalMap>::value>::equal_key,
typename MapInit = internal::small_map_default_init<NormalMap>>
class small_map {
static_assert(kArraySize > 0, "Initial size must be greater than 0");
static_assert(kArraySize != kUsingFullMapSentinel,
"Initial size out of range");
public:
typedef typename NormalMap::key_type key_type;
typedef typename NormalMap::mapped_type data_type;
typedef typename NormalMap::mapped_type mapped_type;
typedef typename NormalMap::value_type value_type;
typedef EqualKey key_equal;
small_map() : size_(0), functor_(MapInit()) {}
explicit small_map(const MapInit& functor) : size_(0), functor_(functor) {}
// Allow copy-constructor and assignment, since STL allows them too.
small_map(const small_map& src) {
// size_ and functor_ are initted in InitFrom()
InitFrom(src);
}
void operator=(const small_map& src) {
if (&src == this) return;
// This is not optimal. If src and dest are both using the small array, we
// could skip the teardown and reconstruct. One problem to be resolved is
// that the value_type itself is pair<const K, V>, and const K is not
// assignable.
Destroy();
InitFrom(src);
}
~small_map() { Destroy(); }
class const_iterator;
class iterator {
public:
typedef typename NormalMap::iterator::iterator_category iterator_category;
typedef typename NormalMap::iterator::value_type value_type;
typedef typename NormalMap::iterator::difference_type difference_type;
typedef typename NormalMap::iterator::pointer pointer;
typedef typename NormalMap::iterator::reference reference;
inline iterator() : array_iter_(nullptr) {}
inline iterator& operator++() {
if (array_iter_ != nullptr) {
++array_iter_;
} else {
++map_iter_;
}
return *this;
}
inline iterator operator++(int /*unused*/) {
iterator result(*this);
++(*this);
return result;
}
inline iterator& operator--() {
if (array_iter_ != nullptr) {
--array_iter_;
} else {
--map_iter_;
}
return *this;
}
inline iterator operator--(int /*unused*/) {
iterator result(*this);
--(*this);
return result;
}
inline value_type* operator->() const {
return array_iter_ ? array_iter_ : map_iter_.operator->();
}
inline value_type& operator*() const {
return array_iter_ ? *array_iter_ : *map_iter_;
}
inline bool operator==(const iterator& other) const {
if (array_iter_ != nullptr) {
return array_iter_ == other.array_iter_;
} else {
return other.array_iter_ == nullptr && map_iter_ == other.map_iter_;
}
}
inline bool operator!=(const iterator& other) const {
return !(*this == other);
}
bool operator==(const const_iterator& other) const;
bool operator!=(const const_iterator& other) const;
private:
friend class small_map;
friend class const_iterator;
inline explicit iterator(value_type* init) : array_iter_(init) {}
inline explicit iterator(const typename NormalMap::iterator& init)
: array_iter_(nullptr), map_iter_(init) {}
value_type* array_iter_;
typename NormalMap::iterator map_iter_;
};
class const_iterator {
public:
typedef typename NormalMap::const_iterator::iterator_category
iterator_category;
typedef typename NormalMap::const_iterator::value_type value_type;
typedef typename NormalMap::const_iterator::difference_type difference_type;
typedef typename NormalMap::const_iterator::pointer pointer;
typedef typename NormalMap::const_iterator::reference reference;
inline const_iterator() : array_iter_(nullptr) {}
// Non-explicit constructor lets us convert regular iterators to const
// iterators.
inline const_iterator(const iterator& other)
: array_iter_(other.array_iter_), map_iter_(other.map_iter_) {}
inline const_iterator& operator++() {
if (array_iter_ != nullptr) {
++array_iter_;
} else {
++map_iter_;
}
return *this;
}
inline const_iterator operator++(int /*unused*/) {
const_iterator result(*this);
++(*this);
return result;
}
inline const_iterator& operator--() {
if (array_iter_ != nullptr) {
--array_iter_;
} else {
--map_iter_;
}
return *this;
}
inline const_iterator operator--(int /*unused*/) {
const_iterator result(*this);
--(*this);
return result;
}
inline const value_type* operator->() const {
return array_iter_ ? array_iter_ : map_iter_.operator->();
}
inline const value_type& operator*() const {
return array_iter_ ? *array_iter_ : *map_iter_;
}
inline bool operator==(const const_iterator& other) const {
if (array_iter_ != nullptr) {
return array_iter_ == other.array_iter_;
}
return other.array_iter_ == nullptr && map_iter_ == other.map_iter_;
}
inline bool operator!=(const const_iterator& other) const {
return !(*this == other);
}
private:
friend class small_map;
inline explicit const_iterator(const value_type* init)
: array_iter_(init) {}
inline explicit const_iterator(
const typename NormalMap::const_iterator& init)
: array_iter_(nullptr), map_iter_(init) {}
const value_type* array_iter_;
typename NormalMap::const_iterator map_iter_;
};
iterator find(const key_type& key) {
key_equal compare;
if (UsingFullMap()) {
return iterator(map()->find(key));
}
for (size_t i = 0; i < size_; ++i) {
if (compare(array_[i].first, key)) {
return iterator(array_ + i);
}
}
return iterator(array_ + size_);
}
const_iterator find(const key_type& key) const {
key_equal compare;
if (UsingFullMap()) {
return const_iterator(map()->find(key));
}
for (size_t i = 0; i < size_; ++i) {
if (compare(array_[i].first, key)) {
return const_iterator(array_ + i);
}
}
return const_iterator(array_ + size_);
}
// Invalidates iterators.
data_type& operator[](const key_type& key) {
key_equal compare;
if (UsingFullMap()) {
return map_[key];
}
// Search backwards to favor recently-added elements.
for (size_t i = size_; i > 0; --i) {
const size_t index = i - 1;
if (compare(array_[index].first, key)) {
return array_[index].second;
}
}
if (size_ == kArraySize) {
ConvertToRealMap();
return map_[key];
}
DCHECK(size_ < kArraySize);
new (&array_[size_]) value_type(key, data_type());
return array_[size_++].second;
}
// Invalidates iterators.
std::pair<iterator, bool> insert(const value_type& x) {
key_equal compare;
if (UsingFullMap()) {
std::pair<typename NormalMap::iterator, bool> ret = map_.insert(x);
return std::make_pair(iterator(ret.first), ret.second);
}
for (size_t i = 0; i < size_; ++i) {
if (compare(array_[i].first, x.first)) {
return std::make_pair(iterator(array_ + i), false);
}
}
if (size_ == kArraySize) {
ConvertToRealMap(); // Invalidates all iterators!
std::pair<typename NormalMap::iterator, bool> ret = map_.insert(x);
return std::make_pair(iterator(ret.first), ret.second);
}
DCHECK(size_ < kArraySize);
new (&array_[size_]) value_type(x);
return std::make_pair(iterator(array_ + size_++), true);
}
// Invalidates iterators.
template <class InputIterator>
void insert(InputIterator f, InputIterator l) {
while (f != l) {
insert(*f);
++f;
}
}
// Invalidates iterators.
template <typename... Args>
std::pair<iterator, bool> emplace(Args&&... args) {
key_equal compare;
if (UsingFullMap()) {
std::pair<typename NormalMap::iterator, bool> ret =
map_.emplace(std::forward<Args>(args)...);
return std::make_pair(iterator(ret.first), ret.second);
}
value_type x(std::forward<Args>(args)...);
for (size_t i = 0; i < size_; ++i) {
if (compare(array_[i].first, x.first)) {
return std::make_pair(iterator(array_ + i), false);
}
}
if (size_ == kArraySize) {
ConvertToRealMap(); // Invalidates all iterators!
std::pair<typename NormalMap::iterator, bool> ret =
map_.emplace(std::move(x));
return std::make_pair(iterator(ret.first), ret.second);
}
DCHECK(size_ < kArraySize);
new (&array_[size_]) value_type(std::move(x));
return std::make_pair(iterator(array_ + size_++), true);
}
iterator begin() {
return UsingFullMap() ? iterator(map_.begin()) : iterator(array_);
}
const_iterator begin() const {
return UsingFullMap() ? const_iterator(map_.begin())
: const_iterator(array_);
}
iterator end() {
return UsingFullMap() ? iterator(map_.end()) : iterator(array_ + size_);
}
const_iterator end() const {
return UsingFullMap() ? const_iterator(map_.end())
: const_iterator(array_ + size_);
}
void clear() {
if (UsingFullMap()) {
map_.~NormalMap();
} else {
for (size_t i = 0; i < size_; ++i) {
array_[i].~value_type();
}
}
size_ = 0;
}
// Invalidates iterators. Returns iterator following the last removed element.
iterator erase(const iterator& position) {
if (UsingFullMap()) {
return iterator(map_.erase(position.map_iter_));
}
size_t i = position.array_iter_ - array_;
// TODO(crbug.com/817982): When we have a checked iterator, this CHECK might
// not be necessary.
CHECK_LE(i, size_);
array_[i].~value_type();
--size_;
if (i != size_) {
new (&array_[i]) value_type(std::move(array_[size_]));
array_[size_].~value_type();
return iterator(array_ + i);
}
return end();
}
size_t erase(const key_type& key) {
iterator iter = find(key);
if (iter == end()) {
return 0;
}
erase(iter);
return 1;
}
size_t count(const key_type& key) const {
return (find(key) == end()) ? 0 : 1;
}
size_t size() const { return UsingFullMap() ? map_.size() : size_; }
bool empty() const { return UsingFullMap() ? map_.empty() : size_ == 0; }
// Returns true if we have fallen back to using the underlying map
// representation.
bool UsingFullMap() const { return size_ == kUsingFullMapSentinel; }
inline NormalMap* map() {
CHECK(UsingFullMap());
return &map_;
}
inline const NormalMap* map() const {
CHECK(UsingFullMap());
return &map_;
}
private:
// When `size_ == kUsingFullMapSentinel`, we have switched storage strategies
// from `array_[kArraySize]` to `NormalMap map_`. See ConvertToRealMap and
// UsingFullMap.
size_t size_;
MapInit functor_;
// We want to call constructors and destructors manually, but we don't want
// to allocate and deallocate the memory used for them separately. Since
// array_ and map_ are mutually exclusive, we'll put them in a union.
union {
value_type array_[kArraySize];
NormalMap map_;
};
void ConvertToRealMap() {
// Storage for the elements in the temporary array. This is intentionally
// declared as a union to avoid having to default-construct |kArraySize|
// elements, only to move construct over them in the initial loop.
union Storage {
Storage() {}
~Storage() {}
value_type array[kArraySize];
} temp;
// Move the current elements into a temporary array.
for (size_t i = 0; i < kArraySize; ++i) {
new (&temp.array[i]) value_type(std::move(array_[i]));
array_[i].~value_type();
}
// Initialize the map.
size_ = kUsingFullMapSentinel;
functor_(&map_);
// Insert elements into it.
for (size_t i = 0; i < kArraySize; ++i) {
map_.insert(std::move(temp.array[i]));
temp.array[i].~value_type();
}
}
// Helpers for constructors and destructors.
void InitFrom(const small_map& src) {
functor_ = src.functor_;
size_ = src.size_;
if (src.UsingFullMap()) {
functor_(&map_);
map_ = src.map_;
} else {
for (size_t i = 0; i < size_; ++i) {
new (&array_[i]) value_type(src.array_[i]);
}
}
}
void Destroy() {
if (UsingFullMap()) {
map_.~NormalMap();
} else {
for (size_t i = 0; i < size_; ++i) {
array_[i].~value_type();
}
}
}
};
template <typename NormalMap,
size_t kArraySize,
typename EqualKey,
typename Functor>
inline bool small_map<NormalMap, kArraySize, EqualKey, Functor>::iterator::
operator==(const const_iterator& other) const {
return other == *this;
}
template <typename NormalMap,
size_t kArraySize,
typename EqualKey,
typename Functor>
inline bool small_map<NormalMap, kArraySize, EqualKey, Functor>::iterator::
operator!=(const const_iterator& other) const {
return other != *this;
}
} // namespace base
#endif // BASE_CONTAINERS_SMALL_MAP_H_

// Copyright 2017 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef BASE_CONTAINERS_SPAN_H_
#define BASE_CONTAINERS_SPAN_H_
#include <stddef.h>
#include <algorithm>
#include <array>
#include <iterator>
#include <limits>
#include <type_traits>
#include <utility>
#include "base/containers/checked_iterators.h"
#include "base/logging.h"
#include "base/macros.h"
#include "base/stl_util.h"
#include "base/template_util.h"
namespace base {
// [views.constants]
constexpr size_t dynamic_extent = std::numeric_limits<size_t>::max();
template <typename T, size_t Extent = dynamic_extent>
class span;
namespace internal {
template <size_t I>
using size_constant = std::integral_constant<size_t, I>;
template <typename T>
struct ExtentImpl : size_constant<dynamic_extent> {};
template <typename T, size_t N>
struct ExtentImpl<T[N]> : size_constant<N> {};
template <typename T, size_t N>
struct ExtentImpl<std::array<T, N>> : size_constant<N> {};
template <typename T, size_t N>
struct ExtentImpl<base::span<T, N>> : size_constant<N> {};
template <typename T>
using Extent = ExtentImpl<std::remove_cv_t<std::remove_reference_t<T>>>;
template <typename T>
struct IsSpanImpl : std::false_type {};
template <typename T, size_t Extent>
struct IsSpanImpl<span<T, Extent>> : std::true_type {};
template <typename T>
using IsNotSpan = negation<IsSpanImpl<std::decay_t<T>>>;
template <typename T>
struct IsStdArrayImpl : std::false_type {};
template <typename T, size_t N>
struct IsStdArrayImpl<std::array<T, N>> : std::true_type {};
template <typename T>
using IsNotStdArray = negation<IsStdArrayImpl<std::decay_t<T>>>;
template <typename T>
using IsNotCArray = negation<std::is_array<std::remove_reference_t<T>>>;
template <typename From, typename To>
using IsLegalDataConversion = std::is_convertible<From (*)[], To (*)[]>;
template <typename Container, typename T>
using ContainerHasConvertibleData = IsLegalDataConversion<
std::remove_pointer_t<decltype(base::data(std::declval<Container>()))>,
T>;
template <typename Container>
using ContainerHasIntegralSize =
std::is_integral<decltype(base::size(std::declval<Container>()))>;
template <typename From, size_t FromExtent, typename To, size_t ToExtent>
using EnableIfLegalSpanConversion =
std::enable_if_t<(ToExtent == dynamic_extent || ToExtent == FromExtent) &&
IsLegalDataConversion<From, To>::value>;
// SFINAE check if Array can be converted to a span<T>.
template <typename Array, typename T, size_t Extent>
using EnableIfSpanCompatibleArray =
std::enable_if_t<(Extent == dynamic_extent ||
Extent == internal::Extent<Array>::value) &&
ContainerHasConvertibleData<Array, T>::value>;
// SFINAE check if Container can be converted to a span<T>.
template <typename Container, typename T>
using IsSpanCompatibleContainer =
conjunction<IsNotSpan<Container>,
IsNotStdArray<Container>,
IsNotCArray<Container>,
ContainerHasConvertibleData<Container, T>,
ContainerHasIntegralSize<Container>>;
template <typename Container, typename T>
using EnableIfSpanCompatibleContainer =
std::enable_if_t<IsSpanCompatibleContainer<Container, T>::value>;
template <typename Container, typename T, size_t Extent>
using EnableIfSpanCompatibleContainerAndSpanIsDynamic =
std::enable_if_t<IsSpanCompatibleContainer<Container, T>::value &&
Extent == dynamic_extent>;
// A helper template for storing the size of a span. Spans with static extents
// don't require additional storage, since the extent itself is specified in the
// template parameter.
template <size_t Extent>
class ExtentStorage {
public:
constexpr explicit ExtentStorage(size_t size) noexcept {}
constexpr size_t size() const noexcept { return Extent; }
};
// Specialization of ExtentStorage for dynamic extents, which do require
// explicit storage for the size.
template <>
struct ExtentStorage<dynamic_extent> {
constexpr explicit ExtentStorage(size_t size) noexcept : size_(size) {}
constexpr size_t size() const noexcept { return size_; }
private:
size_t size_;
};
} // namespace internal
// A span is a value type that represents an array of elements of type T. Since
// it only consists of a pointer to memory with an associated size, it is very
// light-weight. It is cheap to construct, copy, move and use spans, so that
// users are encouraged to use it as a pass-by-value parameter. A span does not
// own the underlying memory, so care must be taken to ensure that a span does
// not outlive the backing store.
//
// span is somewhat analogous to StringPiece, but with arbitrary element types,
// allowing mutation if T is non-const.
//
// span is implicitly convertible from C++ arrays, as well as most [1]
// container-like types that provide a data() and size() method (such as
// std::vector<T>). A mutable span<T> can also be implicitly converted to an
// immutable span<const T>.
//
// Consider using a span for functions that take a data pointer and size
// parameter: it allows the function to still act on an array-like type, while
// allowing the caller code to be a bit more concise.
//
// For read-only data access pass a span<const T>: the caller can supply either
// a span<const T> or a span<T>, while the callee will have a read-only view.
// For read-write access a mutable span<T> is required.
//
// Without span:
// Read-Only:
// // std::string HexEncode(const uint8_t* data, size_t size);
// std::vector<uint8_t> data_buffer = GenerateData();
// std::string r = HexEncode(data_buffer.data(), data_buffer.size());
//
// Mutable:
// // ssize_t SafeSNPrintf(char* buf, size_t N, const char* fmt, Args...);
// char str_buffer[100];
// SafeSNPrintf(str_buffer, sizeof(str_buffer), "Pi ~= %lf", 3.14);
//
// With span:
// Read-Only:
// // std::string HexEncode(base::span<const uint8_t> data);
// std::vector<uint8_t> data_buffer = GenerateData();
// std::string r = HexEncode(data_buffer);
//
// Mutable:
// // ssize_t SafeSNPrintf(base::span<char>, const char* fmt, Args...);
// char str_buffer[100];
// SafeSNPrintf(str_buffer, "Pi ~= %lf", 3.14);
//
// Spans with "const" and pointers
// -------------------------------
//
// Const and pointers can get confusing. Here are vectors of pointers and their
// corresponding spans:
//
// const std::vector<int*> => base::span<int* const>
// std::vector<const int*> => base::span<const int*>
// const std::vector<const int*> => base::span<const int* const>
//
// Differences from the C++20 draft
// --------------------------------
//
// http://eel.is/c++draft/views contains the latest C++20 draft of std::span.
// Chromium tries to follow the draft as closely as possible. Differences
// between the draft and the implementation are documented in subsections
// below.
//
// Differences from [span.objectrep]:
// - as_bytes() and as_writable_bytes() return spans of uint8_t instead of
// std::byte (std::byte is a C++17 feature)
//
// Differences from [span.cons]:
// - Constructing a static span (i.e. Extent != dynamic_extent) from a dynamic
// sized container (e.g. std::vector) requires an explicit conversion (in the
// C++20 draft this is simply UB)
//
// Differences from [span.obs]:
// - empty() is marked with WARN_UNUSED_RESULT instead of [[nodiscard]]
// ([[nodiscard]] is a C++17 feature)
//
// Furthermore, all constructors and methods are marked noexcept due to the lack
// of exceptions in Chromium.
//
// Due to the lack of class template argument deduction guides in C++14
// appropriate make_span() utility functions are provided.
// [span], class template span
template <typename T, size_t Extent>
class span : public internal::ExtentStorage<Extent> {
private:
using ExtentStorage = internal::ExtentStorage<Extent>;
public:
using element_type = T;
using value_type = std::remove_cv_t<T>;
using size_type = size_t;
using difference_type = ptrdiff_t;
using pointer = T*;
using reference = T&;
using iterator = CheckedContiguousIterator<T>;
// TODO(https://crbug.com/828324): Drop the const_iterator typedef once gMock
// supports containers without this nested type.
using const_iterator = iterator;
using reverse_iterator = std::reverse_iterator<iterator>;
static constexpr size_t extent = Extent;
// [span.cons], span constructors, copy, assignment, and destructor
constexpr span() noexcept : ExtentStorage(0), data_(nullptr) {
static_assert(Extent == dynamic_extent || Extent == 0, "Invalid Extent");
}
constexpr span(T* data, size_t size) noexcept
: ExtentStorage(size), data_(data) {
CHECK(Extent == dynamic_extent || Extent == size);
}
// Artificially templatized to break ambiguity for span(ptr, 0).
template <typename = void>
constexpr span(T* begin, T* end) noexcept : span(begin, end - begin) {
// Note: CHECK_LE is not constexpr, hence regular CHECK must be used.
CHECK(begin <= end);
}
template <
size_t N,
typename = internal::EnableIfSpanCompatibleArray<T (&)[N], T, Extent>>
constexpr span(T (&array)[N]) noexcept : span(base::data(array), N) {}
template <
typename U,
size_t N,
typename =
internal::EnableIfSpanCompatibleArray<std::array<U, N>&, T, Extent>>
constexpr span(std::array<U, N>& array) noexcept
: span(base::data(array), N) {}
template <typename U,
size_t N,
typename = internal::
EnableIfSpanCompatibleArray<const std::array<U, N>&, T, Extent>>
constexpr span(const std::array<U, N>& array) noexcept
: span(base::data(array), N) {}
// Conversion from a container that has compatible base::data() and integral
// base::size().
template <
typename Container,
typename =
internal::EnableIfSpanCompatibleContainerAndSpanIsDynamic<Container&,
T,
Extent>>
constexpr span(Container& container) noexcept
: span(base::data(container), base::size(container)) {}
template <
typename Container,
typename = internal::EnableIfSpanCompatibleContainerAndSpanIsDynamic<
const Container&,
T,
Extent>>
constexpr span(const Container& container) noexcept
: span(base::data(container), base::size(container)) {}
constexpr span(const span& other) noexcept = default;
// Conversions from spans of compatible types and extents: this allows a
// span<T> to be seamlessly used as a span<const T>, but not the other way
// around. If extent is not dynamic, OtherExtent has to be equal to Extent.
template <
typename U,
size_t OtherExtent,
typename =
internal::EnableIfLegalSpanConversion<U, OtherExtent, T, Extent>>
constexpr span(const span<U, OtherExtent>& other)
: span(other.data(), other.size()) {}
constexpr span& operator=(const span& other) noexcept = default;
~span() noexcept = default;
// [span.sub], span subviews
template <size_t Count>
constexpr span<T, Count> first() const noexcept {
static_assert(Count <= Extent, "Count must not exceed Extent");
CHECK(Extent != dynamic_extent || Count <= size());
return {data(), Count};
}
template <size_t Count>
constexpr span<T, Count> last() const noexcept {
static_assert(Count <= Extent, "Count must not exceed Extent");
CHECK(Extent != dynamic_extent || Count <= size());
return {data() + (size() - Count), Count};
}
template <size_t Offset, size_t Count = dynamic_extent>
constexpr span<T,
(Count != dynamic_extent
? Count
: (Extent != dynamic_extent ? Extent - Offset
: dynamic_extent))>
subspan() const noexcept {
static_assert(Offset <= Extent, "Offset must not exceed Extent");
static_assert(Count == dynamic_extent || Count <= Extent - Offset,
"Count must not exceed Extent - Offset");
CHECK(Extent != dynamic_extent || Offset <= size());
CHECK(Extent != dynamic_extent || Count == dynamic_extent ||
Count <= size() - Offset);
return {data() + Offset, Count != dynamic_extent ? Count : size() - Offset};
}
constexpr span<T, dynamic_extent> first(size_t count) const noexcept {
// Note: CHECK_LE is not constexpr, hence regular CHECK must be used.
CHECK(count <= size());
return {data(), count};
}
constexpr span<T, dynamic_extent> last(size_t count) const noexcept {
// Note: CHECK_LE is not constexpr, hence regular CHECK must be used.
CHECK(count <= size());
return {data() + (size() - count), count};
}
constexpr span<T, dynamic_extent> subspan(size_t offset,
size_t count = dynamic_extent) const
noexcept {
// Note: CHECK_LE is not constexpr, hence regular CHECK must be used.
CHECK(offset <= size());
CHECK(count == dynamic_extent || count <= size() - offset);
return {data() + offset, count != dynamic_extent ? count : size() - offset};
}
// [span.obs], span observers
constexpr size_t size() const noexcept { return ExtentStorage::size(); }
constexpr size_t size_bytes() const noexcept { return size() * sizeof(T); }
constexpr bool empty() const noexcept WARN_UNUSED_RESULT {
return size() == 0;
}
// [span.elem], span element access
constexpr T& operator[](size_t idx) const noexcept {
// Note: CHECK_LT is not constexpr, hence regular CHECK must be used.
CHECK(idx < size());
return *(data() + idx);
}
constexpr T& front() const noexcept {
static_assert(Extent == dynamic_extent || Extent > 0,
"Extent must not be 0");
CHECK(Extent != dynamic_extent || !empty());
return *data();
}
constexpr T& back() const noexcept {
static_assert(Extent == dynamic_extent || Extent > 0,
"Extent must not be 0");
CHECK(Extent != dynamic_extent || !empty());
return *(data() + size() - 1);
}
constexpr T* data() const noexcept { return data_; }
// [span.iter], span iterator support
constexpr iterator begin() const noexcept {
return iterator(data_, data_ + size());
}
constexpr iterator end() const noexcept {
return iterator(data_, data_ + size(), data_ + size());
}
constexpr reverse_iterator rbegin() const noexcept {
return reverse_iterator(end());
}
constexpr reverse_iterator rend() const noexcept {
return reverse_iterator(begin());
}
private:
T* data_;
};
// span<T, Extent>::extent cannot be declared inline prior to C++17, hence this
// definition is required.
template <class T, size_t Extent>
constexpr size_t span<T, Extent>::extent;
// [span.objectrep], views of object representation
template <typename T, size_t X>
span<const uint8_t, (X == dynamic_extent ? dynamic_extent : sizeof(T) * X)>
as_bytes(span<T, X> s) noexcept {
return {reinterpret_cast<const uint8_t*>(s.data()), s.size_bytes()};
}
template <typename T,
size_t X,
typename = std::enable_if_t<!std::is_const<T>::value>>
span<uint8_t, (X == dynamic_extent ? dynamic_extent : sizeof(T) * X)>
as_writable_bytes(span<T, X> s) noexcept {
return {reinterpret_cast<uint8_t*>(s.data()), s.size_bytes()};
}
// Type-deducing helpers for constructing a span.
template <int&... ExplicitArgumentBarrier, typename T>
constexpr span<T> make_span(T* data, size_t size) noexcept {
return {data, size};
}
template <int&... ExplicitArgumentBarrier, typename T>
constexpr span<T> make_span(T* begin, T* end) noexcept {
return {begin, end};
}
// make_span utility function that deduces both the span's value_type and extent
// from the passed in argument.
//
// Usage: auto span = base::make_span(...);
template <int&... ExplicitArgumentBarrier, typename Container>
constexpr auto make_span(Container&& container) noexcept {
using T =
std::remove_pointer_t<decltype(base::data(std::declval<Container>()))>;
using Extent = internal::Extent<Container>;
return span<T, Extent::value>(std::forward<Container>(container));
}
// make_span utility function that allows callers to explicitly specify the
// span's extent; the value_type is deduced automatically. This is useful when
// passing a dynamically sized container to a method expecting static spans,
// when the container is known to have the correct size.
//
// Note: This will CHECK that N indeed matches size(container).
//
// Usage: auto static_span = base::make_span<N>(...);
template <size_t N, int&... ExplicitArgumentBarrier, typename Container>
constexpr auto make_span(Container&& container) noexcept {
using T =
std::remove_pointer_t<decltype(base::data(std::declval<Container>()))>;
return span<T, N>(base::data(container), base::size(container));
}
} // namespace base
#endif // BASE_CONTAINERS_SPAN_H_

// Copyright 2017 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef BASE_CONTAINERS_STACK_H_
#define BASE_CONTAINERS_STACK_H_
#include <stack>
#include "base/containers/circular_deque.h"
namespace base {
// Provides a definition of base::stack that's like std::stack but uses a
// base::circular_deque instead of std::deque. Since std::stack is just a
// wrapper for an underlying type, we can just provide a typedef for it that
// defaults to the base circular_deque.
template <class T, class Container = circular_deque<T>>
using stack = std::stack<T, Container>;
} // namespace base
#endif // BASE_CONTAINERS_STACK_H_

// Copyright (c) 2012 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef BASE_CONTAINERS_STACK_CONTAINER_H_
#define BASE_CONTAINERS_STACK_CONTAINER_H_
#include <stddef.h>
#include <vector>
#include "base/macros.h"
#include "build/build_config.h"
namespace base {
// This allocator can be used with STL containers to provide a stack buffer
// from which to allocate memory and overflows onto the heap. This stack buffer
// would be allocated on the stack and allows us to avoid heap operations in
// some situations.
//
// STL likes to make copies of allocators, so the allocator itself can't hold
// the data. Instead, we make the creator responsible for creating a
// StackAllocator::Source which contains the data. Copying the allocator
// merely copies the pointer to this shared source, so all allocators created
// based on our allocator will share the same stack buffer.
//
// This stack buffer implementation is very simple. The first allocation that
// fits in the stack buffer will use the stack buffer. Any subsequent
// allocations will not use the stack buffer, even if there is unused room.
// This makes it appropriate for array-like containers, but the caller should
// be sure to reserve() in the container up to the stack buffer size. Otherwise
// the container will allocate a small array which will "use up" the stack
// buffer.
template<typename T, size_t stack_capacity>
class StackAllocator : public std::allocator<T> {
public:
typedef typename std::allocator<T>::pointer pointer;
typedef typename std::allocator<T>::size_type size_type;
// Backing store for the allocator. The container owner is responsible for
// maintaining this for as long as any containers using this allocator are
// live.
struct Source {
Source() : used_stack_buffer_(false) {
}
// Casts the buffer to the right type.
T* stack_buffer() { return reinterpret_cast<T*>(stack_buffer_); }
const T* stack_buffer() const {
return reinterpret_cast<const T*>(&stack_buffer_);
}
// The buffer itself. It is not of type T because we don't want the
// constructors and destructors to be automatically called. Define a POD
// buffer of the right size instead.
alignas(T) char stack_buffer_[sizeof(T[stack_capacity])];
#if defined(__GNUC__) && !defined(ARCH_CPU_X86_FAMILY)
static_assert(alignof(T) <= 16, "http://crbug.com/115612");
#endif
// Set when the stack buffer is used for an allocation. We do not track
// how much of the buffer is used, only that somebody is using it.
bool used_stack_buffer_;
};
// Used by containers when they want to refer to an allocator of type U.
template<typename U>
struct rebind {
typedef StackAllocator<U, stack_capacity> other;
};
// For the straight up copy c-tor, we can share storage.
StackAllocator(const StackAllocator<T, stack_capacity>& rhs)
: std::allocator<T>(), source_(rhs.source_) {
}
// ISO C++ requires the following constructor to be defined,
// and std::vector in VC++2008SP1 Release fails with an error
// in the class _Container_base_aux_alloc_real (from <xutility>)
// if the constructor does not exist.
// For this constructor, we cannot share storage; there's
// no guarantee that the Source buffer of Ts is large enough
// for Us.
// TODO: If we were fancy pants, perhaps we could share storage
// iff sizeof(T) == sizeof(U).
template<typename U, size_t other_capacity>
StackAllocator(const StackAllocator<U, other_capacity>& other)
: source_(NULL) {
}
// This constructor must exist. It creates a default allocator that doesn't
// actually have a stack buffer. glibc's std::string() will compare the
// current allocator against the default-constructed allocator, so this
// should be fast.
StackAllocator() : source_(NULL) {
}
explicit StackAllocator(Source* source) : source_(source) {
}
// Actually do the allocation. Use the stack buffer if nobody has used it yet
// and the size requested fits. Otherwise, fall through to the standard
// allocator.
pointer allocate(size_type n) {
if (source_ && !source_->used_stack_buffer_ && n <= stack_capacity) {
source_->used_stack_buffer_ = true;
return source_->stack_buffer();
} else {
return std::allocator<T>::allocate(n);
}
}
// Free: when trying to free the stack buffer, just mark it as free. For
// non-stack-buffer pointers, just fall though to the standard allocator.
void deallocate(pointer p, size_type n) {
if (source_ && p == source_->stack_buffer())
source_->used_stack_buffer_ = false;
else
std::allocator<T>::deallocate(p, n);
}
private:
Source* source_;
};
// A wrapper around STL containers that maintains a stack-sized buffer that the
// initial capacity of the vector is based on. Growing the container beyond the
// stack capacity will transparently overflow onto the heap. The container must
// support reserve().
//
// This will not work with std::string since some implementations allocate
// more bytes than requested in calls to reserve(), forcing the allocation onto
// the heap. http://crbug.com/709273
//
// WATCH OUT: the ContainerType MUST use the proper StackAllocator for this
// type. This object is really intended to be used only internally. You'll want
// to use the wrappers below for different types.
template<typename TContainerType, int stack_capacity>
class StackContainer {
public:
typedef TContainerType ContainerType;
typedef typename ContainerType::value_type ContainedType;
typedef StackAllocator<ContainedType, stack_capacity> Allocator;
// Allocator must be constructed before the container!
StackContainer() : allocator_(&stack_data_), container_(allocator_) {
// Make the container use the stack allocation by reserving our buffer size
// before doing anything else.
container_.reserve(stack_capacity);
}
// Getters for the actual container.
//
// Danger: any copies of this made using the copy constructor must have
// shorter lifetimes than the source. The copy will share the same allocator
// and therefore the same stack buffer as the original. Use std::copy to
// copy into a "real" container for longer-lived objects.
ContainerType& container() { return container_; }
const ContainerType& container() const { return container_; }
// Support operator-> to get to the container. This allows nicer syntax like:
// StackContainer<...> foo;
// std::sort(foo->begin(), foo->end());
ContainerType* operator->() { return &container_; }
const ContainerType* operator->() const { return &container_; }
#ifdef UNIT_TEST
// Retrieves the stack source so that unit tests can verify that the
// buffer is being used properly.
const typename Allocator::Source& stack_data() const {
return stack_data_;
}
#endif
protected:
typename Allocator::Source stack_data_;
Allocator allocator_;
ContainerType container_;
private:
DISALLOW_COPY_AND_ASSIGN(StackContainer);
};
// Range-based iteration support for StackContainer.
template <typename TContainerType, int stack_capacity>
auto begin(
const StackContainer<TContainerType, stack_capacity>& stack_container)
-> decltype(begin(stack_container.container())) {
return begin(stack_container.container());
}
template <typename TContainerType, int stack_capacity>
auto begin(StackContainer<TContainerType, stack_capacity>& stack_container)
-> decltype(begin(stack_container.container())) {
return begin(stack_container.container());
}
template <typename TContainerType, int stack_capacity>
auto end(StackContainer<TContainerType, stack_capacity>& stack_container)
-> decltype(end(stack_container.container())) {
return end(stack_container.container());
}
template <typename TContainerType, int stack_capacity>
auto end(const StackContainer<TContainerType, stack_capacity>& stack_container)
-> decltype(end(stack_container.container())) {
return end(stack_container.container());
}
// StackVector -----------------------------------------------------------------
// Example:
// StackVector<int, 16> foo;
// foo->push_back(22); // we have overloaded operator->
// foo[0] = 10; // as well as operator[]
template<typename T, size_t stack_capacity>
class StackVector : public StackContainer<
std::vector<T, StackAllocator<T, stack_capacity> >,
stack_capacity> {
public:
StackVector() : StackContainer<
std::vector<T, StackAllocator<T, stack_capacity> >,
stack_capacity>() {
}
// We need to put this in STL containers sometimes, which requires a copy
// constructor. We can't call the regular copy constructor because that will
// take the stack buffer from the original. Here, we create an empty object
// and make a stack buffer of its own.
StackVector(const StackVector<T, stack_capacity>& other)
: StackContainer<
std::vector<T, StackAllocator<T, stack_capacity> >,
stack_capacity>() {
this->container().assign(other->begin(), other->end());
}
StackVector<T, stack_capacity>& operator=(
const StackVector<T, stack_capacity>& other) {
this->container().assign(other->begin(), other->end());
return *this;
}
// Vectors are commonly indexed, which isn't very convenient even with
// operator-> (using "->at()" does exception stuff we don't want).
T& operator[](size_t i) { return this->container().operator[](i); }
const T& operator[](size_t i) const {
return this->container().operator[](i);
}
};
} // namespace base
#endif // BASE_CONTAINERS_STACK_CONTAINER_H_

// Copyright 2017 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef BASE_CONTAINERS_UNIQUE_PTR_ADAPTERS_H_
#define BASE_CONTAINERS_UNIQUE_PTR_ADAPTERS_H_
#include <memory>
namespace base {
// This transparent comparator allows lookup by raw pointer in a container of
// unique pointers. This functionality is based on C++14 extensions to the
// std::set/std::map interface, and can also be used with
// base::flat_set/base::flat_map.
//
// Example usage:
// Foo* foo = ...
// std::set<std::unique_ptr<Foo>, base::UniquePtrComparator> set;
// set.insert(std::unique_ptr<Foo>(foo));
// ...
// auto it = set.find(foo);
// EXPECT_EQ(foo, it->get());
//
// You can find more information about transparent comparisons here:
// http://en.cppreference.com/w/cpp/utility/functional/less_void
struct UniquePtrComparator {
using is_transparent = int;
template <typename T, class Deleter = std::default_delete<T>>
bool operator()(const std::unique_ptr<T, Deleter>& lhs,
const std::unique_ptr<T, Deleter>& rhs) const {
return lhs < rhs;
}
template <typename T, class Deleter = std::default_delete<T>>
bool operator()(const T* lhs, const std::unique_ptr<T, Deleter>& rhs) const {
return lhs < rhs.get();
}
template <typename T, class Deleter = std::default_delete<T>>
bool operator()(const std::unique_ptr<T, Deleter>& lhs, const T* rhs) const {
return lhs.get() < rhs;
}
};
// UniquePtrMatcher is useful for finding an element in a container of
// unique_ptrs when you have the raw pointer.
//
// Example usage:
// std::vector<std::unique_ptr<Foo>> vector;
// Foo* element = ...
// auto iter = std::find_if(vector.begin(), vector.end(),
// MatchesUniquePtr(element));
//
// Example of erasing from container:
// EraseIf(v, MatchesUniquePtr(element));
//
template <class T, class Deleter = std::default_delete<T>>
struct UniquePtrMatcher {
explicit UniquePtrMatcher(T* t) : t_(t) {}
bool operator()(const std::unique_ptr<T, Deleter>& o) {
return o.get() == t_;
}
private:
T* const t_;
};
template <class T, class Deleter = std::default_delete<T>>
UniquePtrMatcher<T, Deleter> MatchesUniquePtr(T* t) {
return UniquePtrMatcher<T, Deleter>(t);
}
} // namespace base
#endif // BASE_CONTAINERS_UNIQUE_PTR_ADAPTERS_H_

// Copyright 2018 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef BASE_CONTAINERS_UTIL_H_
#define BASE_CONTAINERS_UTIL_H_
#include <stdint.h>
namespace base {
// TODO(crbug.com/817982): What we really need is for checked_math.h to be
// able to do checked arithmetic on pointers.
template <typename T>
static inline uintptr_t get_uintptr(const T* t) {
return reinterpret_cast<uintptr_t>(t);
}
} // namespace base
#endif // BASE_CONTAINERS_UTIL_H_

// Copyright 2017 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef BASE_CONTAINERS_VECTOR_BUFFERS_H_
#define BASE_CONTAINERS_VECTOR_BUFFERS_H_
#include <stdlib.h>
#include <string.h>
#include <type_traits>
#include <utility>
#include "base/containers/util.h"
#include "base/logging.h"
#include "base/macros.h"
#include "base/numerics/checked_math.h"
namespace base {
namespace internal {
// Internal implementation detail of base/containers.
//
// Implements a vector-like buffer that holds a certain capacity of T. Unlike
// std::vector, VectorBuffer never constructs or destructs its arguments, and
// can't change sizes. But it does implement templates to assist in efficient
// moving and destruction of those items manually.
//
// In particular, the destructor function does not iterate over the items if
// there is no destructor. Moves should be implemented as a memcpy/memmove for
// trivially copyable objects (POD); otherwise, as a std::move if possible,
// and as a last resort a copy. This behavior is similar to std::vector.
//
// No special consideration is done for noexcept move constructors since
// we compile without exceptions.
//
// The current API does not support moving overlapping ranges.
template <typename T>
class VectorBuffer {
public:
constexpr VectorBuffer() = default;
#if defined(__clang__) && !defined(__native_client__)
// This constructor converts an uninitialized void* to a T* which triggers
// clang Control Flow Integrity. Since this is as-designed, disable.
__attribute__((no_sanitize("cfi-unrelated-cast", "vptr")))
#endif
VectorBuffer(size_t count)
: buffer_(reinterpret_cast<T*>(
malloc(CheckMul(sizeof(T), count).ValueOrDie()))),
capacity_(count) {
}
VectorBuffer(VectorBuffer&& other) noexcept
: buffer_(other.buffer_), capacity_(other.capacity_) {
other.buffer_ = nullptr;
other.capacity_ = 0;
}
~VectorBuffer() { free(buffer_); }
VectorBuffer& operator=(VectorBuffer&& other) {
free(buffer_);
buffer_ = other.buffer_;
capacity_ = other.capacity_;
other.buffer_ = nullptr;
other.capacity_ = 0;
return *this;
}
size_t capacity() const { return capacity_; }
T& operator[](size_t i) {
// TODO(crbug.com/817982): Some call sites (at least circular_deque.h) are
// calling this with `i == capacity_` as a way of getting `end()`. Therefore
// we have to allow this for now (`i <= capacity_`), until we fix those call
// sites to use real iterators. This comment applies here and to `const T&
// operator[]`, below.
CHECK_LE(i, capacity_);
return buffer_[i];
}
const T& operator[](size_t i) const {
CHECK_LE(i, capacity_);
return buffer_[i];
}
T* begin() { return buffer_; }
T* end() { return &buffer_[capacity_]; }
// DestructRange ------------------------------------------------------------
// Trivially destructible objects need not have their destructors called.
template <typename T2 = T,
typename std::enable_if<std::is_trivially_destructible<T2>::value,
int>::type = 0>
void DestructRange(T* begin, T* end) {}
// Non-trivially destructible objects must have their destructors called
// individually.
template <typename T2 = T,
typename std::enable_if<!std::is_trivially_destructible<T2>::value,
int>::type = 0>
void DestructRange(T* begin, T* end) {
CHECK_LE(begin, end);
while (begin != end) {
begin->~T();
begin++;
}
}
// MoveRange ----------------------------------------------------------------
//
// The destructor will be called (as necessary) for all moved types. The
// ranges must not overlap.
//
// The parameters are begin and end (one past the last) of the input buffer,
// and the address of the first element to copy to. There must be sufficient
// room in the destination for all items in the range [begin, end).
// Trivially copyable types can use memcpy. Trivially copyable implies a
// trivial destructor, so there is no destructor to call for the source.
template <typename T2 = T,
typename std::enable_if<base::is_trivially_copyable<T2>::value,
int>::type = 0>
static void MoveRange(T* from_begin, T* from_end, T* to) {
CHECK(!RangesOverlap(from_begin, from_end, to));
memcpy(
to, from_begin,
CheckSub(get_uintptr(from_end), get_uintptr(from_begin)).ValueOrDie());
}
// Not trivially copyable, but movable: call the move constructor and
// destruct the original.
template <typename T2 = T,
typename std::enable_if<std::is_move_constructible<T2>::value &&
!base::is_trivially_copyable<T2>::value,
int>::type = 0>
static void MoveRange(T* from_begin, T* from_end, T* to) {
CHECK(!RangesOverlap(from_begin, from_end, to));
while (from_begin != from_end) {
new (to) T(std::move(*from_begin));
from_begin->~T();
from_begin++;
to++;
}
}
// Not movable, not trivially copyable: call the copy constructor and
// destruct the original.
template <typename T2 = T,
typename std::enable_if<!std::is_move_constructible<T2>::value &&
!base::is_trivially_copyable<T2>::value,
int>::type = 0>
static void MoveRange(T* from_begin, T* from_end, T* to) {
CHECK(!RangesOverlap(from_begin, from_end, to));
while (from_begin != from_end) {
new (to) T(*from_begin);
from_begin->~T();
from_begin++;
to++;
}
}
private:
static bool RangesOverlap(const T* from_begin,
const T* from_end,
const T* to) {
const auto from_begin_uintptr = get_uintptr(from_begin);
const auto from_end_uintptr = get_uintptr(from_end);
const auto to_uintptr = get_uintptr(to);
return !(
to_uintptr >= from_end_uintptr ||
CheckAdd(to_uintptr, CheckSub(from_end_uintptr, from_begin_uintptr))
.ValueOrDie() <= from_begin_uintptr);
}
T* buffer_ = nullptr;
size_t capacity_ = 0;
DISALLOW_COPY_AND_ASSIGN(VectorBuffer);
};
} // namespace internal
} // namespace base
#endif // BASE_CONTAINERS_VECTOR_BUFFERS_H_