# General theory of operation:
#
# We implement an API that closely mirrors the stdlib ssl module's blocking
# API, and we do it using the stdlib ssl module's non-blocking in-memory API.
# The stdlib non-blocking in-memory API is barely documented, and acts as a
# thin wrapper around openssl, whose documentation also leaves something to be
# desired. So here are the main things you need to know to understand the code
# in this file:
#
# We use an ssl.SSLObject, which exposes the four main I/O operations:
#
# - do_handshake: performs the initial handshake. Must be called once at the
#   beginning of each connection; is a no-op once it's completed once.
#
# - write: takes some unencrypted data and attempts to send it to the remote
#   peer.
#
# - read: attempts to decrypt and return some data from the remote peer.
#
# - unwrap: this is weirdly named; maybe it helps to realize that the thing it
#   wraps is called SSL_shutdown. It sends a cryptographically signed message
#   saying "I'm closing this connection now", and then waits to receive the
#   same from the remote peer (unless we already received one, in which case
#   it returns immediately).
#
# All of these operations read and write from some in-memory buffers called
# "BIOs", which are an opaque OpenSSL-specific object that's basically
# semantically equivalent to a Python bytearray. When they want to send some
# bytes to the remote peer, they append them to the outgoing BIO, and when
# they want to receive some bytes from the remote peer, they try to pull them
# out of the incoming BIO. "Sending" always succeeds, because the outgoing BIO
# can always be extended to hold more data. "Receiving" acts sort of like a
# non-blocking socket: it might manage to get some data immediately, or it
# might fail and need to be tried again later. We can also directly add or
# remove data from the BIOs whenever we want.
#
# Now the problem is that while these I/O operations are opaque atomic
# operations from the point of view of us calling them, under the hood they
# might require some arbitrary sequence of sends and receives from the remote
# peer. This is particularly true for do_handshake, which generally requires a
# few round trips, but it's also true for write and read, due to an evil thing
# called "renegotiation".
#
# Renegotiation is the process by which one of the peers might arbitrarily
# decide to redo the handshake at any time. Did I mention it's evil? It's
# pretty evil, and almost universally hated. The HTTP/2 spec forbids the use
# of TLS renegotiation for HTTP/2 connections. TLS 1.3 removes it from the
# protocol entirely. It's impossible to trigger a renegotiation if using
# Python's ssl module. OpenSSL's renegotiation support is pretty buggy [1].
# Nonetheless, it does get used in real life, mostly in two cases:
#
# 1) Normally in TLS 1.2 and below, when the client side of a connection wants
# to present a certificate to prove their identity, that certificate gets sent
# in plaintext. This is bad, because it means that anyone eavesdropping can
# see who's connecting – it's like sending your username in plain text. Not as
# bad as sending your password in plain text, but still, pretty bad. However,
# renegotiations *are* encrypted. So as a workaround, it's not uncommon for
# systems that want to use client certificates to first do an anonymous
# handshake, and then to turn around and do a second handshake (=
# renegotiation) and this time ask for a client cert. Or sometimes this is
# done on a case-by-case basis, e.g. a web server might accept a connection,
# read the request, and then once it sees the page you're asking for it might
# stop and ask you for a certificate.
#
# 2) In principle the same TLS connection can be used for an arbitrarily long
# time, and might transmit arbitrarily large amounts of data. But this creates
# a cryptographic problem: an attacker who has access to arbitrarily large
# amounts of data that's all encrypted using the same key may eventually be
# able to use this to figure out the key. Is this a real practical problem? I
# have no idea, I'm not a cryptographer. In any case, some people worry that
# it's a problem, so their TLS libraries are designed to automatically trigger
# a renegotiation every once in a while on some sort of timer.
#
# The end result is that you might be going along, minding your own business,
# and then *bam*! a wild renegotiation appears! And you just have to cope.
#
# The reason that coping with renegotiations is difficult is that some
# unassuming "read" or "write" call might find itself unable to progress until
# it does a handshake, which remember is a process with multiple round
# trips. So read might have to send data, and write might have to receive
# data, and this might happen multiple times. And some of those attempts might
# fail because there isn't any data yet, and need to be retried. Managing all
# this is pretty complicated.
#
# Here's how openssl (and thus the stdlib ssl module) handles this. All of the
# I/O operations above follow the same rules. When you call one of them:
#
# - it might write some data to the outgoing BIO
# - it might read some data from the incoming BIO
# - it might raise SSLWantReadError if it can't complete without reading more
#   data from the incoming BIO. This is important: the "read" in ReadError
#   refers to reading from the *underlying* stream.
# - (and in principle it might raise SSLWantWriteError too, but that never
#   happens when using memory BIOs, so never mind)
#
# If it doesn't raise an error, then the operation completed successfully
# (though we still need to take any outgoing data out of the memory buffer and
# put it onto the wire). If it *does* raise an error, then we need to retry
# *exactly that method call* later – in particular, if a 'write' failed, we
# need to try again later *with the same data*, because openssl might have
# already committed some of the initial parts of our data to its output even
# though it didn't tell us that, and has remembered that the next time we call
# write it needs to skip the first 1024 bytes or whatever it is. (Well,
# technically, we're actually allowed to call 'write' again with a data buffer
# which is the same as our old one PLUS some extra stuff added onto the end,
# but in Trio that never comes up so never mind.)
#
# There are some people online who claim that once you've gotten a Want*Error
# then the *very next call* you make to openssl *must* be the same as the
# previous one. I'm pretty sure those people are wrong. In particular, it's
# okay to call write, get a WantReadError, and then call read a few times;
# it's just that *the next time you call write*, it has to be with the same
# data.
#
# One final wrinkle: we want our SSLStream to support full-duplex operation,
# i.e. it should be possible for one task to be calling send_all while another
# task is calling receive_some. But renegotiation makes this a big hassle,
# because even if SSLStream's users restrict themselves to one task calling
# send_all and one task calling receive_some, those two tasks might end up
# both wanting to call send_all, or both wanting to call receive_some, at the
# same time *on the underlying stream*. So we have to do some careful locking
# to hide this problem from our users.
#
# (Renegotiation is evil.)
#
# So our basic strategy is to define a single helper method called "_retry",
# which has generic logic for dealing with SSLWantReadError, pushing data from
# the outgoing BIO to the wire, reading data from the wire to the incoming
# BIO, retrying an I/O call until it works, and synchronizing with other tasks
# that might be calling _retry concurrently. Basically it takes an SSLObject
# non-blocking in-memory method and converts it into a Trio async blocking
# method. _retry is only about 30 lines of code, but all these cases
# multiplied by concurrent calls make it extremely tricky, so there are lots
# of comments down below on the details, and a really extensive test suite in
# test_ssl.py. And now you know *why* it's so tricky, and can probably
# understand how it works.
#
# [1] https://rt.openssl.org/Ticket/Display.html?id=3712

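# As a rough illustrative sketch of the calling pattern described above: the
# helper below is exposition only, it is not used anywhere in Trio, and the
# blocking-socket driver loop (and its "sock"/"hostname" parameters) are
# assumptions made just for this example. It shows the same BIO dance that
# SSLStream._retry performs asynchronously further down in this file.
def _manual_tls_handshake_sketch(sock, hostname):  # pragma: no cover
    import ssl

    ctx = ssl.create_default_context()
    incoming = ssl.MemoryBIO()
    outgoing = ssl.MemoryBIO()
    ssl_object = ctx.wrap_bio(incoming, outgoing, server_hostname=hostname)
    while True:
        try:
            ssl_object.do_handshake()
            break
        except ssl.SSLWantReadError:
            # openssl needs more data from the peer: flush whatever it queued
            # in the outgoing BIO onto the wire, then feed the peer's reply
            # into the incoming BIO and call do_handshake again.
            sock.sendall(outgoing.read())
            data = sock.recv(16384)
            if not data:
                incoming.write_eof()  # peer closed; the next retry will raise
            else:
                incoming.write(data)
    # The final flight of the handshake may still be sitting in the outgoing
    # BIO, so flush it.
    sock.sendall(outgoing.read())
    return ssl_object

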
# XX how closely should we match the stdlib API?
# - maybe suppress_ragged_eofs=False is a better default?
# - maybe check crypto folks for advice?
# - this is also interesting: https://bugs.python.org/issue8108#msg102867

# Definitely keep an eye on Cory's TLS API ideas on security-sig etc.

# XX document behavior on cancellation/error (i.e.: all is lost abandon
# stream)
# docs will need to make very clear that this is different from all the other
# cancellations in core Trio

import operator as _operator
import ssl as _stdlib_ssl
from enum import Enum as _Enum

import trio

from .abc import Stream, Listener
from ._highlevel_generic import aclose_forcefully
from . import _sync
from ._util import ConflictDetector, SubclassingDeprecatedIn_v0_15_0
from ._deprecate import warn_deprecated

################################################################
# SSLStream
################################################################

# Ideally, when the user calls SSLStream.receive_some() with no argument, then
# we should do exactly one call to self.transport_stream.receive_some(),
# decrypt everything we got, and return it. Unfortunately, the way openssl's
# API works, we have to pick how much data we want to allow when we call
# read(), and then it (potentially) triggers a call to
# transport_stream.receive_some(). So at the time we pick the amount of data
# to decrypt, we don't know how much data we've read. As a simple heuristic,
# we record the max amount of data returned by previous calls to
# transport_stream.receive_some(), and we use that for future calls to read().
# But what do we use for the very first call? That's what this constant sets.
#
# Note that the value passed to read() is a limit on the amount of
# *decrypted* data, but we can only see the size of the *encrypted* data
# returned by transport_stream.receive_some(). TLS adds a small amount of
# framing overhead, and TLS compression is rarely used these days because it's
# insecure. So the size of the encrypted data should be a slight over-estimate
# of the size of the decrypted data, which is exactly what we want.
#
# The specific value is not really based on anything; it might be worth tuning
# at some point. But, if you have a TCP connection with the typical 1500 byte
# MTU and an initial window of 10 (see RFC 6928), then the initial burst of
# data will be limited to ~15000 bytes (or a bit less due to IP-level framing
# overhead), so this is chosen to be larger than that.
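# (As a worked illustration of that bound: 10 segments of roughly
# 1500 - 40 = 1460 bytes of TCP payload each is about 14,600 bytes, which sits
# comfortably below the 16384 chosen here.)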
STARTING_RECEIVE_SIZE = 16384


class NeedHandshakeError(Exception):
    """Some :class:`SSLStream` methods can't return any meaningful data until
    after the handshake. If you call them before the handshake, they raise
    this error.

    """


class _Once:
    def __init__(self, afn, *args):
        self._afn = afn
        self._args = args
        self.started = False
        self._done = _sync.Event()

    async def ensure(self, *, checkpoint):
        if not self.started:
            self.started = True
            await self._afn(*self._args)
            self._done.set()
        elif not checkpoint and self._done.is_set():
            return
        else:
            await self._done.wait()

    @property
    def done(self):
        return self._done.is_set()


_State = _Enum("_State", ["OK", "BROKEN", "CLOSED"])


class SSLStream(Stream, metaclass=SubclassingDeprecatedIn_v0_15_0):
    r"""Encrypted communication using SSL/TLS.

    :class:`SSLStream` wraps an arbitrary :class:`~trio.abc.Stream`, and
    allows you to perform encrypted communication over it using the usual
    :class:`~trio.abc.Stream` interface. You pass regular data to
    :meth:`send_all`, then it encrypts it and sends the encrypted data on the
    underlying :class:`~trio.abc.Stream`; :meth:`receive_some` takes encrypted
    data out of the underlying :class:`~trio.abc.Stream` and decrypts it
    before returning it.

    You should read the standard library's :mod:`ssl` documentation carefully
    before attempting to use this class, and probably other general
    documentation on SSL/TLS as well. SSL/TLS is subtle and quick to
    anger. Really. I'm not kidding.

    Args:
      transport_stream (~trio.abc.Stream): The stream used to transport
          encrypted data. Required.

      ssl_context (~ssl.SSLContext): The :class:`~ssl.SSLContext` used for
          this connection. Required. Usually created by calling
          :func:`ssl.create_default_context`.

      server_hostname (str or None): The name of the server being connected
          to. Used for `SNI
          <https://en.wikipedia.org/wiki/Server_Name_Indication>`__ and for
          validating the server's certificate (if hostname checking is
          enabled). This is effectively mandatory for clients, and actually
          mandatory if ``ssl_context.check_hostname`` is ``True``.

      server_side (bool): Whether this stream is acting as a client or
          server. Defaults to False, i.e. client mode.

      https_compatible (bool): There are two versions of SSL/TLS commonly
          encountered in the wild: the standard version, and the version used
          for HTTPS (HTTP-over-SSL/TLS).

          Standard-compliant SSL/TLS implementations always send a
          cryptographically signed ``close_notify`` message before closing the
          connection. This is important because if the underlying transport
          were simply closed, then there wouldn't be any way for the other
          side to know whether the connection was intentionally closed by the
          peer that they negotiated a cryptographic connection to, or by some
          `man-in-the-middle
          <https://en.wikipedia.org/wiki/Man-in-the-middle_attack>`__ attacker
          who can't manipulate the cryptographic stream, but can manipulate
          the transport layer (a so-called "truncation attack").

          However, this part of the standard is widely ignored by real-world
          HTTPS implementations, which means that if you want to interoperate
          with them, then you NEED to ignore it too.

          Fortunately this isn't as bad as it sounds, because the HTTP
          protocol already includes its own equivalent of ``close_notify``, so
          doing this again at the SSL/TLS level is redundant. But not all
          protocols do! Therefore, by default Trio implements the safer
          standard-compliant version (``https_compatible=False``). But if
          you're speaking HTTPS or some other protocol where
          ``close_notify``\s are commonly skipped, then you should set
          ``https_compatible=True``; with this setting, Trio will neither
          expect nor send ``close_notify`` messages.

          If you have code that was written to use :class:`ssl.SSLSocket` and
          now you're porting it to Trio, then it may be useful to know that a
          difference between :class:`SSLStream` and :class:`ssl.SSLSocket` is
          that :class:`~ssl.SSLSocket` implements the
          ``https_compatible=True`` behavior by default.

    Attributes:
      transport_stream (trio.abc.Stream): The underlying transport stream
          that was passed to ``__init__``. An example of when this would be
          useful is if you're using :class:`SSLStream` over a
          :class:`~trio.SocketStream` and want to call the
          :class:`~trio.SocketStream`'s :meth:`~trio.SocketStream.setsockopt`
          method.

    Internally, this class is implemented using an instance of
    :class:`ssl.SSLObject`, and all of :class:`~ssl.SSLObject`'s methods and
    attributes are re-exported as methods and attributes on this class.
    However, there is one difference: :class:`~ssl.SSLObject` has several
    methods that return information about the encrypted connection, like
    :meth:`~ssl.SSLSocket.cipher` or
    :meth:`~ssl.SSLSocket.selected_alpn_protocol`. If you call them before the
    handshake, when they can't possibly return useful data, then
    :class:`ssl.SSLObject` returns None, but :class:`trio.SSLStream`
    raises :exc:`NeedHandshakeError`.

    This also means that if you register a SNI callback using
    `~ssl.SSLContext.sni_callback`, then the first argument your callback
    receives will be a :class:`ssl.SSLObject`.

    """

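    # A minimal client-side usage sketch (illustrative only; the host name and
    # the request bytes here are hypothetical):
    #
    #     import ssl
    #     import trio
    #
    #     async def main():
    #         tcp_stream = await trio.open_tcp_stream("example.com", 443)
    #         ssl_stream = trio.SSLStream(
    #             tcp_stream,
    #             ssl.create_default_context(),
    #             server_hostname="example.com",
    #         )
    #         await ssl_stream.do_handshake()
    #         await ssl_stream.send_all(
    #             b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
    #         )
    #         print(await ssl_stream.receive_some())
    #
    #     trio.run(main)
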
    # Note: any new arguments here should likely also be added to
    # SSLListener.__init__, and maybe the open_ssl_over_tcp_* helpers.
    def __init__(
        self,
        transport_stream,
        ssl_context,
        *,
        server_hostname=None,
        server_side=False,
        https_compatible=False,
        max_refill_bytes="unused and deprecated",
    ):
        self.transport_stream = transport_stream
        self._state = _State.OK
        if max_refill_bytes != "unused and deprecated":
            warn_deprecated("max_refill_bytes=...", "0.12.0", issue=959, instead=None)
        self._https_compatible = https_compatible
        self._outgoing = _stdlib_ssl.MemoryBIO()
        self._delayed_outgoing = None
        self._incoming = _stdlib_ssl.MemoryBIO()
        self._ssl_object = ssl_context.wrap_bio(
            self._incoming,
            self._outgoing,
            server_side=server_side,
            server_hostname=server_hostname,
        )
        # Tracks whether we've already done the initial handshake
        self._handshook = _Once(self._do_handshake)

        # These are used to synchronize access to self.transport_stream
        self._inner_send_lock = _sync.StrictFIFOLock()
        self._inner_recv_count = 0
        self._inner_recv_lock = _sync.Lock()

        # These are used to make sure that our caller doesn't attempt to make
        # multiple concurrent calls to send_all/wait_send_all_might_not_block
        # or to receive_some.
        self._outer_send_conflict_detector = ConflictDetector(
            "another task is currently sending data on this SSLStream"
        )
        self._outer_recv_conflict_detector = ConflictDetector(
            "another task is currently receiving data on this SSLStream"
        )

        self._estimated_receive_size = STARTING_RECEIVE_SIZE

    _forwarded = {
        "context",
        "server_side",
        "server_hostname",
        "session",
        "session_reused",
        "getpeercert",
        "selected_npn_protocol",
        "cipher",
        "shared_ciphers",
        "compression",
        "pending",
        "get_channel_binding",
        "selected_alpn_protocol",
        "version",
    }

    _after_handshake = {
        "session_reused",
        "getpeercert",
        "selected_npn_protocol",
        "cipher",
        "shared_ciphers",
        "compression",
        "get_channel_binding",
        "selected_alpn_protocol",
        "version",
    }

    def __getattr__(self, name):
        if name in self._forwarded:
            if name in self._after_handshake and not self._handshook.done:
                raise NeedHandshakeError(
                    "call do_handshake() before calling {!r}".format(name)
                )

            return getattr(self._ssl_object, name)
        else:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        if name in self._forwarded:
            setattr(self._ssl_object, name, value)
        else:
            super().__setattr__(name, value)

    def __dir__(self):
        return super().__dir__() + list(self._forwarded)

    def _check_status(self):
        if self._state is _State.OK:
            return
        elif self._state is _State.BROKEN:
            raise trio.BrokenResourceError
        elif self._state is _State.CLOSED:
            raise trio.ClosedResourceError
        else:  # pragma: no cover
            assert False

    # This is probably the single trickiest function in Trio. It has lots of
    # comments, though, so just make sure to think carefully if you ever have
    # to touch it. The big comment at the top of this file will help explain
    # too.
    async def _retry(self, fn, *args, ignore_want_read=False, is_handshake=False):
        await trio.lowlevel.checkpoint_if_cancelled()
        yielded = False
        finished = False
        while not finished:
            # WARNING: this code needs to be very careful with when it
            # calls 'await'! There might be multiple tasks calling this
            # function at the same time trying to do different operations,
            # so we need to be careful to:
            #
            # 1) interact with the SSLObject, then
            # 2) await on exactly one thing that lets us make forward
            #    progress, then
            # 3) loop or exit
            #
            # In particular we don't want to yield while interacting with
            # the SSLObject (because it's shared state, so someone else
            # might come in and mess with it while we're suspended), and
            # we don't want to yield *before* starting the operation that
            # will help us make progress, because then someone else might
            # come in and leapfrog us.

            # Call the SSLObject method, and get its result.
            #
            # NB: despite what the docs say, SSLWantWriteError can't
            # happen – "Writes to memory BIOs will always succeed if
            # memory is available: that is their size can grow
            # indefinitely."
            # https://wiki.openssl.org/index.php/Manual:BIO_s_mem(3)
            want_read = False
            ret = None
            try:
                ret = fn(*args)
            except _stdlib_ssl.SSLWantReadError:
                want_read = True
            except (_stdlib_ssl.SSLError, _stdlib_ssl.CertificateError) as exc:
                self._state = _State.BROKEN
                raise trio.BrokenResourceError from exc
            else:
                finished = True
            if ignore_want_read:
                want_read = False
                finished = True
            to_send = self._outgoing.read()

            # Some versions of SSL_do_handshake have a bug in how they handle
            # the TLS 1.3 handshake on the server side: after the handshake
            # finishes, they automatically send session tickets, even though
            # the client may not be expecting data to arrive at this point and
            # sending it could cause a deadlock or lost data. This applies at
            # least to OpenSSL 1.1.1c and earlier, and the OpenSSL devs
            # currently have no plans to fix it:
            #
            # https://github.com/openssl/openssl/issues/7948
            # https://github.com/openssl/openssl/issues/7967
            #
            # The correct behavior is to wait to send session tickets on the
            # first call to SSL_write. (This is what BoringSSL does.) So, we
            # use a heuristic to detect when OpenSSL has tried to send session
            # tickets, and we manually delay sending them until the
            # appropriate moment. For more discussion see:
            #
            # https://github.com/python-trio/trio/issues/819#issuecomment-517529763
            if (
                is_handshake
                and not want_read
                and self._ssl_object.server_side
                and self._ssl_object.version() == "TLSv1.3"
            ):
                assert self._delayed_outgoing is None
                self._delayed_outgoing = to_send
                to_send = b""

            # Outputs from the above code block are:
            #
            # - to_send: bytestring; if non-empty then we need to send
            #   this data to make forward progress
            #
            # - want_read: True if we need to receive_some some data to make
            #   forward progress
            #
            # - finished: False means that we need to retry the call to
            #   fn(*args) again, after having pushed things forward. True
            #   means we still need to do whatever was said (in particular
            #   send any data in to_send), but once we do then we're
            #   done.
            #
            # - ret: the operation's return value. (Meaningless unless
            #   finished is True.)
            #
            # Invariant: want_read and finished can't both be True at the
            # same time.
            #
            # Now we need to move things forward. There are two things we
            # might have to do, and any given operation might require
            # either, both, or neither to proceed:
            #
            # - send the data in to_send
            #
            # - receive_some some data and put it into the incoming BIO
            #
            # Our strategy is: if there's data to send, send it;
            # *otherwise* if there's data to receive_some, receive_some it.
            #
            # If both need to happen, then we only send. Why? Well, we
            # know that *right now* we have to both send and receive_some
            # before the operation can complete. But as soon as we yield,
            # that information becomes potentially stale – e.g. while
            # we're sending, some other task might go and receive_some the
            # data we need and put it into the incoming BIO. And if it
            # does, then we *definitely don't* want to do a receive_some –
            # there might not be any more data coming, and we'd deadlock!
            # We could do something tricky to keep track of whether a
            # receive_some happens while we're sending, but the case where
            # we have to do both is very unusual (only during a
            # renegotiation), so it's better to keep things simple. So we
            # do just one potentially-blocking operation, then check again
            # for fresh information.
            #
            # And we prioritize sending over receiving because, if there
            # are multiple tasks that want to receive_some, then it
            # doesn't matter what order they go in. But if there are
            # multiple tasks that want to send, then they each have
            # different data, and the data needs to get put onto the wire
            # in the same order that it was retrieved from the outgoing
            # BIO. So if we have data to send, that *needs* to be the
            # *very* *next* *thing* we do, to make sure no-one else sneaks
            # in before us. Or if we can't send immediately because
            # someone else is, then we at least need to get in line
            # immediately.
            if to_send:
                # NOTE: This relies on the lock being strict FIFO fair!
                async with self._inner_send_lock:
                    yielded = True
                    try:
                        if self._delayed_outgoing is not None:
                            to_send = self._delayed_outgoing + to_send
                            self._delayed_outgoing = None
                        await self.transport_stream.send_all(to_send)
                    except:
                        # Some unknown amount of our data got sent, and we
                        # don't know how much. This stream is doomed.
                        self._state = _State.BROKEN
                        raise
            elif want_read:
                # It's possible that someone else is already blocked in
                # transport_stream.receive_some. If so then we want to
                # wait for them to finish, but we don't want to call
                # transport_stream.receive_some again ourselves; we just
                # want to loop around and check if their contribution
                # helped anything. So we make a note of how many times
                # some task has been through here before taking the lock,
                # and if it's changed by the time we get the lock, then we
                # skip calling transport_stream.receive_some and loop
                # around immediately.
                recv_count = self._inner_recv_count
                async with self._inner_recv_lock:
                    yielded = True
                    if recv_count == self._inner_recv_count:
                        data = await self.transport_stream.receive_some()
                        if not data:
                            self._incoming.write_eof()
                        else:
                            self._estimated_receive_size = max(
                                self._estimated_receive_size, len(data)
                            )
                            self._incoming.write(data)
                        self._inner_recv_count += 1
        if not yielded:
            await trio.lowlevel.cancel_shielded_checkpoint()
        return ret

    async def _do_handshake(self):
        try:
            await self._retry(self._ssl_object.do_handshake, is_handshake=True)
        except:
            self._state = _State.BROKEN
            raise

    async def do_handshake(self):
        """Ensure that the initial handshake has completed.

        The SSL protocol requires an initial handshake to exchange
        certificates, select cryptographic keys, and so forth, before any
        actual data can be sent or received. You don't have to call this
        method; if you don't, then :class:`SSLStream` will automatically
        perform the handshake as needed, the first time you try to send or
        receive data. But if you want to trigger it manually – for example,
        because you want to look at the peer's certificate before you start
        talking to them – then you can call this method.

        If the initial handshake is already in progress in another task, this
        waits for it to complete and then returns.

        If the initial handshake has already completed, this returns
        immediately without doing anything (except executing a checkpoint).

        .. warning:: If this method is cancelled, then it may leave the
           :class:`SSLStream` in an unusable state. If this happens then any
           future attempt to use the object will raise
           :exc:`trio.BrokenResourceError`.

        """
        self._check_status()
        await self._handshook.ensure(checkpoint=True)

    # Most things work if we don't explicitly force do_handshake to be called
    # before calling receive_some or send_all, because openssl will
    # automatically perform the handshake on the first SSL_{read,write}
    # call. BUT, allowing openssl to do this will disable Python's hostname
    # checking!!! See:
    # https://bugs.python.org/issue30141
    # So we *definitely* have to make sure that do_handshake is called
    # before doing anything else.
    async def receive_some(self, max_bytes=None):
        """Read some data from the underlying transport, decrypt it, and
        return it.

        See :meth:`trio.abc.ReceiveStream.receive_some` for details.

        .. warning:: If this method is cancelled while the initial handshake
           or a renegotiation is in progress, then it may leave the
           :class:`SSLStream` in an unusable state. If this happens then any
           future attempt to use the object will raise
           :exc:`trio.BrokenResourceError`.

        """
        with self._outer_recv_conflict_detector:
            self._check_status()
            try:
                await self._handshook.ensure(checkpoint=False)
            except trio.BrokenResourceError as exc:
                # For some reason, EOF before handshake sometimes raises
                # SSLSyscallError instead of SSLEOFError (e.g. on my linux
                # laptop, but not on appveyor). Thanks openssl.
                if self._https_compatible and isinstance(
                    exc.__cause__,
                    (_stdlib_ssl.SSLEOFError, _stdlib_ssl.SSLSyscallError),
                ):
                    await trio.lowlevel.checkpoint()
                    return b""
                else:
                    raise
            if max_bytes is None:
                # If we somehow have more data already in our pending buffer
                # than the estimated receive size, bump up our size a bit for
                # this read only.
                max_bytes = max(self._estimated_receive_size, self._incoming.pending)
            else:
                max_bytes = _operator.index(max_bytes)
                if max_bytes < 1:
                    raise ValueError("max_bytes must be >= 1")
            try:
                return await self._retry(self._ssl_object.read, max_bytes)
            except trio.BrokenResourceError as exc:
                # This isn't quite equivalent to just returning b"" in the
                # first place, because we still end up with self._state set to
                # BROKEN. But that's actually fine, because after getting an
                # EOF on TLS then the only thing you can do is close the
                # stream, and closing doesn't care about the state.
                if self._https_compatible and isinstance(
                    exc.__cause__, _stdlib_ssl.SSLEOFError
                ):
                    await trio.lowlevel.checkpoint()
                    return b""
                else:
                    raise

    async def send_all(self, data):
        """Encrypt some data and then send it on the underlying transport.

        See :meth:`trio.abc.SendStream.send_all` for details.

        .. warning:: If this method is cancelled, then it may leave the
           :class:`SSLStream` in an unusable state. If this happens then any
           attempt to use the object will raise
           :exc:`trio.BrokenResourceError`.

        """
        with self._outer_send_conflict_detector:
            self._check_status()
            await self._handshook.ensure(checkpoint=False)
            # SSLObject interprets write(b"") as an EOF for some reason, which
            # is not what we want.
            if not data:
                await trio.lowlevel.checkpoint()
                return
            await self._retry(self._ssl_object.write, data)

    async def unwrap(self):
        """Cleanly close down the SSL/TLS encryption layer, allowing the
        underlying stream to be used for unencrypted communication.

        You almost certainly don't need this.

        Returns:
          A pair ``(transport_stream, trailing_bytes)``, where
          ``transport_stream`` is the underlying transport stream, and
          ``trailing_bytes`` is a byte string. Since :class:`SSLStream`
          doesn't necessarily know where the end of the encrypted data will
          be, it can happen that it accidentally reads too much from the
          underlying stream. ``trailing_bytes`` contains this extra data; you
          should process it as if it was returned from a call to
          ``transport_stream.receive_some(...)``.

        """
        with self._outer_recv_conflict_detector, self._outer_send_conflict_detector:
            self._check_status()
            await self._handshook.ensure(checkpoint=False)
            await self._retry(self._ssl_object.unwrap)
            transport_stream = self.transport_stream
            self.transport_stream = None
            self._state = _State.CLOSED
            return (transport_stream, self._incoming.read())

    async def aclose(self):
        """Gracefully shut down this connection, and close the underlying
        transport.

        If ``https_compatible`` is False (the default), then this attempts to
        first send a ``close_notify`` and then close the underlying stream by
        calling its :meth:`~trio.abc.AsyncResource.aclose` method.

        If ``https_compatible`` is set to True, then this simply closes the
        underlying stream and marks this stream as closed.

        """
        if self._state is _State.CLOSED:
            await trio.lowlevel.checkpoint()
            return
        if self._state is _State.BROKEN or self._https_compatible:
            self._state = _State.CLOSED
            await self.transport_stream.aclose()
            return
        try:
            # https_compatible=False, so we're in spec-compliant mode and have
            # to send close_notify so that the other side gets a cryptographic
            # assurance that we've called aclose. Of course, we can't do
            # anything cryptographic until after we've completed the
            # handshake:
            await self._handshook.ensure(checkpoint=False)
            # Then, we call SSL_shutdown *once*, because we want to send a
            # close_notify but *not* wait for the other side to send back a
            # response. In principle it would be more polite to wait for the
            # other side to reply with their own close_notify. However, if
            # they aren't paying attention (e.g., if they're just sending
            # data and not receiving) then we will never notice our
            # close_notify and we'll be waiting forever. Eventually we'll time
            # out (hopefully), but it's still kind of nasty. And we can't
            # require the other side to always be receiving, because (a)
            # backpressure is kind of important, and (b) I bet there are
            # broken TLS implementations out there that don't receive all the
            # time. (Like e.g. anyone using Python ssl in synchronous mode.)
            #
            # The send-then-immediately-close behavior is explicitly allowed
            # by the TLS specs, so we're ok on that.
            #
            # Subtlety: SSLObject.unwrap will immediately call it a second
            # time, and the second time will raise SSLWantReadError because
            # there hasn't been time for the other side to respond
            # yet. (Unless they spontaneously sent a close_notify before we
            # called this, and it's either already been processed or gets
            # pulled out of the buffer by Python's second call.) So the way to
            # do what we want is to ignore SSLWantReadError on this call.
            #
            # Also, because the other side might have already sent
            # close_notify and closed their connection then it's possible that
            # our attempt to send close_notify will raise
            # BrokenResourceError. This is totally legal, and in fact can
            # happen with two well-behaved Trio programs talking to each
            # other, so we don't want to raise an error. So we suppress
            # BrokenResourceError here. (This is safe, because literally the
            # only thing this call to _retry will do is send the close_notify
            # alert, so that's surely where the error comes from.)
            #
            # FYI in some cases this could also raise SSLSyscallError which I
            # think is because SSL_shutdown is terrible. (Check out that note
            # at the bottom of the man page saying that it sometimes gets
            # raised spuriously.) I haven't seen this since we switched to
            # immediately closing the socket, and I don't know exactly what
            # conditions cause it and how to respond, so for now we're just
            # letting that happen. But if you start seeing it, then hopefully
            # this will give you a little head start on tracking it down,
            # because whoa did this puzzle us at the 2017 PyCon sprints.
            #
            # Also, if someone else is blocked in send/receive, then we aren't
            # going to be able to do a clean shutdown. If that happens, we'll
            # just do an unclean shutdown.
            try:
                await self._retry(self._ssl_object.unwrap, ignore_want_read=True)
            except (trio.BrokenResourceError, trio.BusyResourceError):
                pass
            except:
                # Failure! Kill the stream and move on.
                await aclose_forcefully(self.transport_stream)
                raise
            else:
                # Success! Gracefully close the underlying stream.
                await self.transport_stream.aclose()
        finally:
            self._state = _State.CLOSED

    async def wait_send_all_might_not_block(self):
        """See :meth:`trio.abc.SendStream.wait_send_all_might_not_block`."""
        # This method's implementation is deceptively simple.
        #
        # First, we take the outer send lock, because of Trio's standard
        # semantics that wait_send_all_might_not_block and send_all
        # conflict.
        with self._outer_send_conflict_detector:
            self._check_status()
            # Then we take the inner send lock. We know that no other tasks
            # are calling self.send_all or self.wait_send_all_might_not_block,
            # because we have the outer_send_lock. But! There might be another
            # task calling self.receive_some -> transport_stream.send_all, in
            # which case if we were to call
            # transport_stream.wait_send_all_might_not_block directly we'd
            # have two tasks doing write-related operations on
            # transport_stream simultaneously, which is not allowed. We
            # *don't* want to raise this conflict to our caller, because it's
            # purely an internal affair – all they did was call
            # wait_send_all_might_not_block and receive_some at the same time,
            # which is totally valid. And waiting for the lock is OK, because
            # a call to send_all certainly wouldn't complete while the other
            # task holds the lock.
            async with self._inner_send_lock:
                # Now we have the lock, which creates another potential
                # problem: what if a call to self.receive_some attempts to do
                # transport_stream.send_all now? It'll have to wait for us to
                # finish! But that's OK, because we release the lock as soon
                # as the underlying stream becomes writable, and the
                # self.receive_some call wasn't going to make any progress
                # until then anyway.
                #
                # Of course, this does mean we might return *before* the
                # stream is logically writable, because immediately after we
                # return self.receive_some might write some data and make it
                # non-writable again. But that's OK too,
                # wait_send_all_might_not_block only guarantees that it
                # doesn't return late.
                await self.transport_stream.wait_send_all_might_not_block()


class SSLListener(Listener[SSLStream], metaclass=SubclassingDeprecatedIn_v0_15_0):
    """A :class:`~trio.abc.Listener` for SSL/TLS-encrypted servers.

    :class:`SSLListener` wraps around another Listener, and converts
    all incoming connections to encrypted connections by wrapping them
    in a :class:`SSLStream`.

    Args:
      transport_listener (~trio.abc.Listener): The listener whose incoming
          connections will be wrapped in :class:`SSLStream`.

      ssl_context (~ssl.SSLContext): The :class:`~ssl.SSLContext` that will be
          used for incoming connections.

      https_compatible (bool): Passed on to :class:`SSLStream`.

    Attributes:
      transport_listener (trio.abc.Listener): The underlying listener that was
          passed to ``__init__``.

    """

    def __init__(
        self,
        transport_listener,
        ssl_context,
        *,
        https_compatible=False,
        max_refill_bytes="unused and deprecated",
    ):
        if max_refill_bytes != "unused and deprecated":
            warn_deprecated("max_refill_bytes=...", "0.12.0", issue=959, instead=None)
        self.transport_listener = transport_listener
        self._ssl_context = ssl_context
        self._https_compatible = https_compatible

    async def accept(self):
        """Accept the next connection and wrap it in an :class:`SSLStream`.

        See :meth:`trio.abc.Listener.accept` for details.

        """
        transport_stream = await self.transport_listener.accept()
        return SSLStream(
            transport_stream,
            self._ssl_context,
            server_side=True,
            https_compatible=self._https_compatible,
        )

    async def aclose(self):
        """Close the transport listener."""
        await self.transport_listener.aclose()