CppCon 2017 Sessions (Audio)

Description

Sessions for CppCon 2017
136 Episodes
The feature set for the C++17 release is final, and the release of the standard is just around the corner. In this session, we'll discuss all the new C++ features in C++17 and how they'll change the way we write C++ software. We'll explore the new standard in breadth, not depth, covering a cornucopia of core language and library features and fixes. Language changes (part 1): structured bindings; selection statements with initializers; compile-time conditional statements; fold expressions; class template deduction; auto non-type template parameters; inline variables; constexpr lambdas; unary static_assert; guaranteed copy elision; nested namespace definitions; preprocessor predicate for header testing. Library changes (part 2): string_view; optional; variant; any; parallel algorithms; filesystem support; polymorphic allocators and memory resources; aligned new; improved insertion and splicing for associative containers; math special functions; variable templates for metafunctions; Boolean logic metafunctions.
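As a quick taste of a few of these features, here is a minimal, illustrative sketch (my own example, not taken from the talk) combining structured bindings, an if statement with an initializer, if constexpr, a fold expression, and std::optional with std::string_view:

```cpp
// Compile as C++17 (e.g. -std=c++17). All names below are example code, not from the talk.
#include <iostream>
#include <map>
#include <optional>
#include <string>
#include <string_view>
#include <type_traits>

// Fold expression: sum an arbitrary number of arguments.
template <typename... Ts>
auto sum(Ts... ts) { return (ts + ... + 0); }

// if constexpr: the discarded branch is not instantiated.
template <typename T>
std::string describe(const T& value) {
    if constexpr (std::is_arithmetic_v<T>)
        return "number: " + std::to_string(value);
    else
        return "other: " + std::string(value);
}

std::optional<int> find_age(const std::map<std::string, int>& ages, std::string_view name) {
    // Selection statement with initializer: 'it' is scoped to the if.
    if (auto it = ages.find(std::string(name)); it != ages.end())
        return it->second;
    return std::nullopt;
}

int main() {
    std::map<std::string, int> ages{{"ada", 36}, {"grace", 85}};

    // Structured bindings unpack each key/value pair.
    for (const auto& [name, age] : ages)
        std::cout << name << " is " << age << '\n';

    std::cout << sum(1, 2, 3, 4) << '\n';        // 10
    std::cout << describe(42) << '\n';           // number: 42
    std::cout << describe("lambdas") << '\n';    // other: lambdas

    if (auto age = find_age(ages, "ada"))        // optional tested in a condition
        std::cout << "ada is " << *age << '\n';
}
```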
C++ solves the problem of runtime polymorphism in a very specific way. It does so through inheritance, by having all classes that will be used polymorphically inherit from the same base class, and then using a table of function pointers (the virtual table) to perform dynamic dispatch when a method is called. Polymorphic objects are then accessed through pointers to their base class, which encourages storing objects on the heap and accessing them via pointers. This is both inconvenient and inefficient when compared to traditional value semantics. As Sean Parent said: Inheritance is the base class of evil. It turns out that this is only one of many possible designs, each of which has different tradeoffs and characteristics. This talk will explore the design space for runtime polymorphism in C++, and in particular will introduce a policy-based approach to solving the problem. We will see how this approach enables runtime polymorphism with stack-allocated storage, heap-allocated storage, shared storage, no storage at all (reference semantics), and more. We will also see how we can get fine-grained control over the dispatch mechanism to beat the performance of classic virtual tables in some cases. The examples will be based on a real implementation in the Dyno library [1], but the principles are independent from the library. At the end of the talk, the audience will walk out with a clear understanding of the different ways of implementing runtime polymorphism, their tradeoffs, and with guidelines on when to use one implementation or another. [1]: https://github.com/ldionne/dyno
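To make the design space concrete, here is a small hand-rolled sketch (my own example, not the Dyno API) of one alternative the talk contrasts with classic inheritance: value-semantic runtime polymorphism via type erasure, where callers hold objects by value rather than through base-class pointers. This particular sketch still allocates on the heap internally; the talk explores variants with stack storage, shared storage, and reference semantics.

```cpp
#include <iostream>
#include <memory>
#include <vector>

class Drawable {
    // Hidden "concept" plays the role of the classic virtual-table interface.
    struct Concept {
        virtual ~Concept() = default;
        virtual void draw() const = 0;
        virtual std::unique_ptr<Concept> clone() const = 0;
    };
    // "Model" adapts any concrete T that provides draw(); no common base class needed.
    template <typename T>
    struct Model final : Concept {
        T value;
        explicit Model(T v) : value(std::move(v)) {}
        void draw() const override { value.draw(); }
        std::unique_ptr<Concept> clone() const override { return std::make_unique<Model>(value); }
    };
    std::unique_ptr<Concept> self_;
public:
    template <typename T>
    Drawable(T x) : self_(std::make_unique<Model<T>>(std::move(x))) {}
    Drawable(const Drawable& other) : self_(other.self_->clone()) {}
    Drawable& operator=(Drawable other) { self_ = std::move(other.self_); return *this; }
    void draw() const { self_->draw(); }
};

struct Circle { void draw() const { std::cout << "circle\n"; } };
struct Square { void draw() const { std::cout << "square\n"; } };

int main() {
    std::vector<Drawable> shapes;    // a container of values, no base-class pointers
    shapes.emplace_back(Circle{});
    shapes.emplace_back(Square{});
    for (const auto& s : shapes) s.draw();
}
```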
The proposed range concepts for the standard library are a significant improvement but are designed for the mental model of iterating and mapping values, not hierarchical domain decomposition. Even for a seemingly trivial array there are countless ways to partition and store its elements in distributed memory, and algorithms are required to behave and scale identically for all of them. It also does not help that most applications operate on multidimensional data structures where efficient access to neighborhood regions is crucial. Among HPC developers, it is therefore widely accepted that the canonical iteration space and the physical memory layout must be specified as separate concepts. For this, we use views based on multidimensional index sets, inspired by the proposed range concepts. In this session, we will explain the challenges of distributing container elements across thousands of cores and how modern C++ allows us to achieve portable efficiency. As an HPC aficionado, you know you want this: copy( matrix_a | local() | block({ 2,3 }), matrix_b | block({ 4,5 }) ) If this does not look familiar to you: we give a gentle introduction to High Performance Computing along the way.
Undefined behavior (UB) is one of the features of C++ that is both loved and hated. Every C++ developer cares about performance, which is why it is very important to understand what the compiler can optimize and what the language guarantees. Programmers are often too optimistic about what the compiler can optimize, or they waste time optimizing code by hand. In this talk you will learn:
- what the "as-if" rule is
- why compilers know less than the programmer — the main problem with translation units
- why compilers optimize based on UB, but don't warn about it
- why undefined behavior can transcend time, removing your whole code without running at 88 mph
- why having a more constrained language is better — optimizations that you can't do in C
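For a flavor of what "optimizing based on UB" means in practice, here is a tiny illustrative sketch (my own example, not from the talk): because signed overflow is undefined, the as-if rule lets the compiler assume it never happens and fold the signed comparison to a constant, while the defined unsigned wraparound forces the real comparison to remain.

```cpp
#include <cstdio>

bool always_bigger(int i) {
    return i + 1 > i;     // UB if i == INT_MAX, so that case can be ignored: typically folds to 'true'
}

bool sometimes_bigger(unsigned u) {
    return u + 1 > u;     // unsigned wraparound is defined, so the comparison must stay
}

int main() {
    std::printf("%d %d\n", always_bigger(41), sometimes_bigger(4294967295u));  // prints: 1 0
}
```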
Technical debt is the bane of most established libraries, whether it is the standard library, Boost, or a local library developed in house. Paying this debt is expensive and in many cases seems infeasible. As a result of several decisions (justified at the time), Google accumulated serious technical debt in how we use std::string. This became a blocking issue in our effort to open source Google's common libraries. To fix this we needed to break the libstdc++ std::string ABI. This is the story of how we survived it and kept Google running.
The most significant improvement in C++17 will be the parallel algorithms in the STL. But they are meant only for CPUs, as C++ does not define heterogeneous devices yet (though SG14 is working on that). How would you like to learn how to run Parallel STL algorithms on both CPU and GPU? Parallel STL is an implementation of the Technical Specification for C++ Extensions for Parallelism for both CPU and GPU, using the SYCL heterogeneous C++ language. This technical specification describes a set of requirements for implementations of an interface that C++ programs may use to invoke algorithms with parallel execution. In practice, it allows users to pass execution policies to traditional STL algorithms, which enables those algorithms to execute in parallel. The various policies can specify different kinds of parallel execution. For example:
std::vector<int> vec = ...
// Traditional sequential sort:
std::sort(vec.begin(), vec.end());
// Explicit sequential sort:
std::sort(seq, vec.begin(), vec.end());
// Explicit parallel sort if possible:
std::sort(par, vec.begin(), vec.end());
// Explicit parallel and vectorized sort if possible:
std::sort(par_unseq, vec.begin(), vec.end());
So how does a Technical Specification become a Standard? As it turns out, in this case, not without harrowing twists and turns worthy of an Agatha Christie novel. This talk will also tell the story behind the C++17 standardization of the Parallelism TS and why we made so many changes. It started life as a Technical Specification (TS); did you know about all the changes we made to it before we added it to C++17, and why? For example, we changed the names of the execution policies, removed exception handling support, disabled dynamic execution, unified some of the numeric algorithm names, allowed copying arguments to function objects given to parallel algorithms, and addressed complexity and iterator concerns, as I lived through it as a member of SG1 and the editor of several TSes. The implementation is available here: https://github.com/KhronosGroup/SyclParallelSTL/blob/master/README.md
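For reference, here is a self-contained C++17 sketch (my own example, not from the talk or the SYCL implementation) of the standardized form of those policies, which ended up in namespace std::execution and the <execution> header. It requires a standard library that actually ships the parallel algorithms.

```cpp
#include <algorithm>
#include <execution>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> vec(1'000'000);
    std::iota(vec.begin(), vec.end(), 0);

    std::sort(std::execution::seq, vec.begin(), vec.end());          // explicit sequential
    std::sort(std::execution::par, vec.begin(), vec.end());          // parallel if possible
    std::sort(std::execution::par_unseq, vec.rbegin(), vec.rend());  // parallel + vectorized, descending

    std::cout << vec.front() << ' ' << vec.back() << '\n';           // 999999 0
}
```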
Meta

2018-01-09 01:00:44

For the past several years, I have been researching new languages to support safe and efficient network protocol processing, specifically for software-defined networking applications. The unfortunate outcome of that research is this conclusion: any language for that domain must also be a general purpose programming language. This is not an easy thing to do. Many of the language features I worked with simply generated expressions to compute packet and header lengths, read and write packet fields, and encode and decode entire packets. If we could do this in C++, I might not need an entirely new language. Over the past year, Herb Sutter and I have collaborated to work on language support for compile-time programming, static reflection, metaclasses, and code generation in the C++ programming language. These facilities completely eliminate the need for the external tools, metacompilers, and domain-specific languages on which we frequently rely to generate high-performance encoders and decoders in C++. In this talk, I will discuss how to use these evolving proposals to create facilities for encoding and decoding packets. In particular, I will discuss the background requirements of my work, the overall design of a network protocol library, and the reflection and generation facilities that implement the library.
Ever wonder how the linker turns your compiled C++ code into an executable file? Why the One Definition Rule exists? Or why your debug builds are so large? In this talk we'll take a deep dive and follow the story of our three adventurers, ELF, MachO, and COFF as they make their way out of Objectville carrying C++ translation units on their backs as they venture to become executables. We'll see as they make their way through the tangled forests of name mangling, climb the cliffs of thread local storage, and wade through the bogs of debug info. We'll see how they mostly follow the same path, but each approach the journey in their own way. We'll also see that becoming an executable is not quite the end of their journey, as the dynamic linker awaits to bring them to yet a higher plane of existence as complete C++ programs running on a machine.
C++ modules-ts[1] proposes a module system, with defined interfaces, implementations and importing. I shall outline the new semantics, their impact on the ABI and build systems, and discuss the in-progress implementation in the GNU C++ Compiler. [1] JTC1/SC22/WG21/n4681, 'Working Draft, Extensions to C++ for Modules', 2017-07-14, Gabriel Dos Reis http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/n4681.pdf
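As a minimal sketch of the modules-ts surface syntax described in n4681 (file names, extensions, and build flags below are assumptions and differ between compilers), a module interface unit exports declarations and another translation unit imports them:

```cpp
// math.cxx — module interface unit: names the module and exports its interface.
export module math;

export int add(int a, int b) { return a + b; }

// main.cxx — an ordinary translation unit that imports the module.
#include <iostream>
import math;

int main() {
    std::cout << add(2, 3) << '\n';   // uses the exported function, no header needed
}
```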
This talk highlights efforts by Kenny Kerr, Herb Sutter, Gabriel Dos Reis, and others to make Windows a great place for C++ developers, replacing proprietary extensions and tools with standard C++ code. The talk begins with a quick introduction to what C++/WinRT is and how it makes it simple to create projects targeting Windows with just ordinary C++ code. It then covers how the project uses advanced C++17 and TS features today, the roadmap for the next few months, and how we're looking ahead at things like modules and metaclasses to dramatically improve the way we think of and use C++ on Windows.
Textures and images are everywhere in today's world. Compressing those images well improves streaming and video performance and dramatically reduces the size and download times of apps. It can give you more realistic virtual reality experiences, more apps that can run on a small mobile device, and a better digital map of our world; it can reduce our impact on climate change through storage savings; and so much more. In the past few years, because of the rapid development in GPUs and innovations in algorithms, the way we approach texture compression has changed. Binomial is developing Basis, a supercompressed texture solution that will also provide an open file format standard in the graphics industry through The Khronos Group that's free for anyone to target. Basis is also written entirely in C++. Binomial co-founders Rich Geldreich and Stephanie Hurlburt will present their latest work in Basis and give insights into how texture compression will continue to evolve.
Almost all of the standard containers have "customization points", ways that users can modify their behaviors. Three of the common ones are allocators, comparison predicates, and hash functors. In this talk, we'll explore these customization methods, then survey the standard containers and container adaptors and show how you can adapt them to your needs.
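Here is a brief illustrative sketch (my own example, not from the talk) of two of those customization points: a custom comparison predicate for an ordered container and a custom hash functor for a user-defined key type.

```cpp
#include <cstddef>
#include <functional>
#include <iostream>
#include <set>
#include <string>
#include <unordered_set>

// Comparison predicate: order strings by length, then lexicographically.
struct ByLength {
    bool operator()(const std::string& a, const std::string& b) const {
        return a.size() != b.size() ? a.size() < b.size() : a < b;
    }
};

// Hash functor for a user-defined key type (plus equality so lookups work).
struct Point { int x, y; bool operator==(const Point& o) const { return x == o.x && y == o.y; } };
struct PointHash {
    std::size_t operator()(const Point& p) const {
        return std::hash<int>{}(p.x) ^ (std::hash<int>{}(p.y) << 1);
    }
};

int main() {
    std::set<std::string, ByLength> words{"banana", "fig", "apple"};
    for (const auto& w : words) std::cout << w << ' ';   // fig apple banana
    std::cout << '\n';

    std::unordered_set<Point, PointHash> points{{1, 2}, {3, 4}};
    std::cout << points.count({1, 2}) << '\n';           // 1
}
```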
Going Nowhere Faster

2018-01-08 01:00:57

You care about the performance of your C++ code. You have followed basic patterns to make your C++ code efficient. You profiled your application or server and used the appropriate algorithms to minimize how much work is done and the appropriate data structures to make it fast. You even have reliable benchmarks to cover the most critical and important parts of the system for performance. But you're profiling the benchmark and need to squeeze even more performance out of it... What next? This talk dives into the performance and optimization concerns of the important, performance-critical loops in your program. How do modern CPUs execute these loops, and what influences their performance? What can you do to make them faster? How can you leverage the C++ compiler to do this while keeping the code maintainable and clean? What optimization techniques do modern compilers make available to you? We'll cover all of this and more, with piles of code, examples, and even a live demo. While the talk will focus somewhat on x86 processors and the LLVM compiler, everything will be broadly applicable, and basic mappings for other processors and toolchains will be discussed throughout. However, be prepared for a lot of C++ code and assembly.
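To have something concrete to profile, a tiny, hypothetical micro-benchmark skeleton like the one below (my own sketch, not code from the talk) is often enough. The "escape"/"clobber" inline-asm idiom, popularized in earlier CppCon benchmarking talks, keeps the optimizer from deleting the hot loop under measurement; it is GCC/Clang specific.

```cpp
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <vector>

static void escape(void* p) { asm volatile("" : : "g"(p) : "memory"); }  // pretend *p is read
static void clobber()       { asm volatile("" : : : "memory"); }         // pretend all memory is touched

int main() {
    std::vector<std::uint32_t> data(1 << 20, 3);
    const auto start = std::chrono::steady_clock::now();

    std::uint64_t sum = 0;
    for (std::uint32_t x : data)   // the hot loop under study
        sum += x;
    escape(&sum);                  // keep 'sum' (and therefore the loop) alive
    clobber();

    const auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
                        std::chrono::steady_clock::now() - start).count();
    std::printf("%lld ns\n", static_cast<long long>(ns));
}
```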
I've spent the last few years watching Facebook's C++ codebase grow by several orders of magnitude. Despite constantly improving abstractions, constantly improving tooling, frequent internal courses, and ongoing internal discussion, there are bug patterns we simply cannot stop from being reintroduced into our code. My hope is to show some of the most common (and infamous) bugs in our history, and the surprising complexity that arises in some apparently simple situations. This talk serves the dual purpose of educating the intermediate (and perhaps the occasional advanced) C++ programmer about some really nasty common pitfalls, and of pleading with experts to help further improve the language, libraries, and best practices so we can educate against and eradicate some of these problematic patterns.
Free Your Functions!

2018-01-08 01:01:41

You are devoted to minimizing coupling and duplication? You are taking care to maximize cohesion, flexibility, extensibility, encapsulation, testability, and even performance in order to achieve the high goals of (object-oriented) programming? Awesome! But wait: you still favor member functions? Seriously? You have been deceived! You have been praying at the altar of false promises! Shed the shackles of Java philosophy! Free your functions! In this talk I will demonstrate why in C++ free functions should generally be preferred to member functions, and why free functions — not member functions! — provide you with all the aforementioned advantages you expect from object-oriented programming. Note, though, that this talk might fundamentally change your perception of C++ and object-oriented programming in general!
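As a small, hypothetical sketch of the idea (my own example, not from the talk): keep the member interface minimal and add behavior as free functions written against that public interface, so the class stays small and extensions do not require touching it, even for types you don't own.

```cpp
#include <iostream>
#include <string>

class Date {
public:
    Date(int y, int m, int d) : y_(y), m_(m), d_(d) {}
    int year()  const { return y_; }
    int month() const { return m_; }
    int day()   const { return d_; }
private:
    int y_, m_, d_;
};

// Free functions: built on the public interface, no access to private data needed.
bool is_december(const Date& d) { return d.month() == 12; }

std::string to_string(const Date& d) {
    return std::to_string(d.year()) + '-' + std::to_string(d.month()) + '-' + std::to_string(d.day());
}

std::ostream& operator<<(std::ostream& os, const Date& d) { return os << to_string(d); }

int main() {
    Date today{2017, 9, 28};
    std::cout << today << " december? " << is_december(today) << '\n';
}
```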
If you were to ask a C++ developer the question "what is execution?", you may get a different answer depending on whom you asked. This is because execution means something different to the various users of C++, in areas such as multi-core parallelism, heterogeneity, distributed systems and networking. There are many commonalities that can be drawn between these different use cases; however, each also has its own distinct requirements. Now imagine if C++ could bring together all of these and form a single unified interface for execution, one which would allow a distinct separation of computations from their method of execution. This is the challenge which a C++ committee subgroup has undertaken. A recent joint effort by a group of interested parties within the C++ committee has been working on a solution which will bring together the requirements of all of these use cases into a single unified interface for execution. This unified interface will provide a generalised way of describing execution that will serve as an abstraction underneath common C++ control structures such as async, task blocks and parallel STL, and above a wide range of resources capable of execution. This talk takes a subjective look at the story so far: the original papers that paved the way to where we are now, the underlying design philosophy that will come to represent execution in C++, and the current state of the proposal in progress. It will also present the various use cases that influenced the proposal, how their requirements helped shape the design, and what challenges are still to be overcome.
Exceptions are often described as 'slow', and the standard advice is to use them only in exceptional circumstances. In this talk, we'll find out how slow exceptions really are by exploring the Itanium exception handling model. We'll dive into several implementations (libunwind, gcc, llvm-libunwind), and learn about everything that happens between throw() and catch(). We will discover the answers to questions such as why throwing an exception takes a global lock (and how to avoid it), how caching can speed up the performance of exceptions, and how to get better stack traces.
Allocators: The Good Parts

2018-01-08 01:00:48

Memory allocators have a bad rap. Sure, they give us control, sometimes vital control, over how and where memory is allocated, but they seem so hard to use correctly. The allocator model that was first standardized in C++98 was put in place to solve a different problem; despite being called "allocators," control over memory allocation was, at best, a secondary consideration. Changes in C++11 and C++17 corrected many of the flaws, at the cost of complexity in the specification. If only there were a user manual and tutorial for allocators, much of that complexity would fall away and could be ignored. This talk strives to be that user manual and tutorial, intended to focus your attention on the important parts of modern allocators, and leaving most of the legacy stuff from 1998 behind. We will look at the easiest way to design a class that uses allocators, and walk through the creation of a real, useful allocator. In the process, I will introduce features in C++17 that can easily be adapted for use with today's C++11 and C++14 standard libraries. My goal is to make allocators approachable, so that you can use them appropriately in your own work.
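As a taste of the modern, C++17 side of this machinery, here is a minimal illustration (my own example, not the allocator developed in the talk) of the polymorphic allocator model: a pmr container drawing its memory from a stack buffer through std::pmr::monotonic_buffer_resource, with no hand-written allocator class at all.

```cpp
#include <array>
#include <cstddef>
#include <iostream>
#include <memory_resource>
#include <string>
#include <vector>

int main() {
    std::array<std::byte, 4096> buffer;                             // stack storage
    std::pmr::monotonic_buffer_resource pool{buffer.data(), buffer.size()};

    // The vector and the strings it holds all allocate from 'pool', not the heap.
    std::pmr::vector<std::pmr::string> names{&pool};
    names.emplace_back("allocator-aware containers");
    names.emplace_back("without writing an allocator class");

    for (const auto& n : names) std::cout << n << '\n';
}
```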
Developing consistent and meaningful benchmark results for code is a complex task. Measurement tools external to applications exist (Intel® VTune™ Amplifier, SmartBear AQTime, Valgrind, etc.), but they are sometimes expensive for small teams or cumbersome to utilize. Celero is a small library which can be added to a C++ project to perform benchmarks on code in a way which is easy to reproduce, share, and compare among individual runs, developers, or projects. This talk will start with an overview of baseline benchmarking, how proper measurements are made, and guidelines for performance optimization. It will then walk developers through the process of writing benchmark code in a way similar to many unit testing libraries. Through practical examples, methods for benchmark design and debugging will be explored. We will then use the library to plot and understand the results. In the end, attendees should feel comfortable exploring the use of Celero in their own projects and adding baseline benchmarking to their testing and delivery processes.
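For orientation, a short sketch of what a Celero benchmark looks like is shown below, based on the library's documented BASELINE/BENCHMARK macros; treat the exact parameters as an assumption and consult the Celero README for the current API.

```cpp
#include <celero/Celero.h>
#include <cmath>

CELERO_MAIN  // supplies main() and the command-line result table

// Each group needs a baseline; other results are reported relative to it.
// Arguments: group name, benchmark name, number of samples, iterations per sample.
BASELINE(Transcendentals, Sin, 30, 1000000)
{
    celero::DoNotOptimizeAway(std::sin(3.14159265));
}

BENCHMARK(Transcendentals, Tan, 30, 1000000)
{
    celero::DoNotOptimizeAway(std::tan(3.14159265));
}
```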