Spamworldpro Mini Shell


Server : Apache/2.4.52 (Ubuntu)
System : Linux webserver 6.8.0-49-generic #49~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Nov 6 17:42:15 UTC 2 x86_64
User : www-data (uid 33)
PHP Version : 8.1.2-1ubuntu2.21
Disable Function : NONE
Directory :  /lib/python3/dist-packages/landscape/lib/__pycache__/

current_dir [ Writeable ] document_root [ Writeable ]

 

Current File : /lib/python3/dist-packages/landscape/lib/__pycache__/fetch.cpython-310.pyc
# fetch.cpython-310.pyc is compiled CPython 3.10 bytecode, not text.
# Below is the source it was compiled from (landscape/lib/fetch.py),
# reconstructed from the names, constants and docstrings embedded in the
# bytecode; statement-level details are approximate.

import io
import os
import sys

from optparse import OptionParser

from twisted.internet.defer import DeferredList
from twisted.internet.threads import deferToThread
from twisted.python.compat import iteritems, networkString


class FetchError(Exception):
    pass


class HTTPCodeError(FetchError):

    def __init__(self, http_code, body):
        self.http_code = http_code
        self.body = body

    def __str__(self):
        return "Server returned HTTP code %d" % self.http_code

    def __repr__(self):
        return "<HTTPCodeError http_code=%d>" % self.http_code


class PyCurlError(FetchError):

    def __init__(self, error_code, message):
        self.error_code = error_code
        self._message = message

    def __str__(self):
        return "Error %d: %s" % (self.error_code, self.message)

    def __repr__(self):
        return "<PyCurlError args=(%d, '%s')>" % (self.error_code, self.message)

    @property
    def message(self):
        return self._message


def fetch(url, post=False, data="", headers={}, cainfo=None, curl=None,
          connect_timeout=30, total_timeout=600, insecure=False, follow=True,
          user_agent=None, proxy=None):
    """Retrieve a URL and return the content.

    @param url: The url to be fetched.
    @param post: If true, the POST method will be used (defaults to GET).
    @param data: Data to be sent to the server as the POST content.
    @param headers: Dictionary of header => value entries to be used on the
        request.
    @param curl: A pycurl.Curl instance to use. If not provided, one will be
        created.
    @param cainfo: Path to the file with CA certificates.
    @param insecure: If true, perform curl using insecure option which will
        not attempt to verify authenticity of the peer's certificate. (Used
        during autodiscovery)
    @param follow: If True, follow HTTP redirects (default True).
    @param user_agent: The user-agent to set in the request.
    @param proxy: The proxy url to use for the request.
    """
    import pycurl

    if not isinstance(data, bytes):
        data = data.encode("utf-8")

    output = io.BytesIO()
    input = io.BytesIO(data)

    if curl is None:
        curl = pycurl.Curl()

    curl.setopt(pycurl.URL, networkString(str(url)))
    if post:
        curl.setopt(pycurl.POST, True)
        if data:
            curl.setopt(pycurl.POSTFIELDSIZE, len(data))
            curl.setopt(pycurl.READFUNCTION, input.read)
    if cainfo and url.startswith("https:"):
        curl.setopt(pycurl.CAINFO, networkString(cainfo))
    if headers:
        curl.setopt(
            pycurl.HTTPHEADER,
            ["%s: %s" % pair for pair in sorted(iteritems(headers))])
    if insecure:
        curl.setopt(pycurl.SSL_VERIFYPEER, False)
    if follow:
        curl.setopt(pycurl.FOLLOWLOCATION, True)
    if user_agent is not None:
        curl.setopt(pycurl.USERAGENT, networkString(user_agent))
    if proxy is not None:
        curl.setopt(pycurl.PROXY, networkString(proxy))
    curl.setopt(pycurl.MAXREDIRS, 5)
    curl.setopt(pycurl.CONNECTTIMEOUT, connect_timeout)
    curl.setopt(pycurl.LOW_SPEED_LIMIT, 1)
    curl.setopt(pycurl.LOW_SPEED_TIME, total_timeout)
    curl.setopt(pycurl.NOSIGNAL, 1)
    curl.setopt(pycurl.WRITEFUNCTION, output.write)
    curl.setopt(pycurl.DNS_CACHE_TIMEOUT, 0)
    curl.setopt(pycurl.ENCODING, "gzip,deflate")

    try:
        curl.perform()
    except pycurl.error as e:
        raise PyCurlError(e.args[0], e.args[1])

    body = output.getvalue()

    http_code = curl.getinfo(pycurl.HTTP_CODE)
    if http_code != 200:
        raise HTTPCodeError(http_code, body)

    return body


def fetch_async(*args, **kwargs):
    """Retrieve a URL asynchronously.

    @return: A C{Deferred} resulting in the URL content.
    """
    return deferToThread(fetch, *args, **kwargs)


def fetch_many_async(urls, callback=None, errback=None, **kwargs):
    """
    Retrieve a list of URLs asynchronously.

    @param callback: Optionally, a function that will be fired one time for
        each successful URL, and will be passed its content and the URL itself.
    @param errback: Optionally, a function that will be fired one time for each
        failing URL, and will be passed the failure and the URL itself.
    @return: A C{DeferredList} whose callback chain will be fired as soon as
        all downloads have terminated. If an error occurs, the errback chain
        of the C{DeferredList} will be fired immediately.
    """
    results = []
    for url in urls:
        result = fetch_async(url, **kwargs)
        if callback:
            result.addCallback(callback, url)
        if errback:
            result.addErrback(errback, url)
        results.append(result)
    return DeferredList(results, fireOnOneErrback=True, consumeErrors=True)


def url_to_filename(url, directory=None):
    """Return the last component of the given C{url}.

    @param url: The URL to get the filename from.
    @param directory: Optionally a path to prepend to the returned filename.

    @note: Any trailing slash in the C{url} will be removed
    """
    filename = url.rstrip("/").split("/")[-1]
    if directory is not None:
        filename = os.path.join(directory, filename)
    return filename


def fetch_to_files(urls, directory, logger=None, **kwargs):
    """
    Retrieve a list of URLs and save their content as files in a directory.

    @param urls: The list of URLs to fetch.
    @param directory: The directory to save the files to, the name of the file
        will equal the last fragment of the URL.
    @param logger: Optional function to be used to log errors for failed URLs.
    """

    def write(data, url):
        filename = url_to_filename(url, directory=directory)
        fd = open(filename, "wb")
        fd.write(data)
        fd.close()

    def log_error(failure, url):
        if logger:
            logger("Couldn't fetch file from %s (%s)"
                   % (url, str(failure.value)))
        return failure

    return fetch_many_async(urls, callback=write, errback=log_error, **kwargs)


def test(args):
    parser = OptionParser()
    parser.add_option("--post", action="store_true")
    parser.add_option("--data", default="")
    parser.add_option("--cainfo")
    options, (url,) = parser.parse_args(args)
    print(fetch(url, post=options.post, data=options.data,
                cainfo=options.cainfo))


if __name__ == "__main__":
    test(sys.argv[1:])

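The module's two error classes carry the failure details a caller needs: `HTTPCodeError` holds the non-200 status plus the response body, and `PyCurlError` wraps the pycurl error code and message. A condensed, standalone sketch (mirroring the class names above, not the installed code) showing what a caller can inspect:

class FetchError(Exception):
    """Base class shared by both failure modes of fetch()."""


class HTTPCodeError(FetchError):
    """Raised when the server answers with a non-200 HTTP code."""

    def __init__(self, http_code, body):
        self.http_code = http_code
        self.body = body

    def __str__(self):
        return "Server returned HTTP code %d" % self.http_code


# Callers can branch on the code and still read the response body.
try:
    raise HTTPCodeError(404, b"not found")
except HTTPCodeError as e:
    print(e)       # Server returned HTTP code 404
    print(e.body)  # b'not found'

Because both classes derive from `FetchError`, code that does not care which transport layer failed can catch the base class alone.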
Spamworldpro Mini
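The `url_to_filename` helper in the module takes the last path component of a URL, ignoring any trailing slash, and optionally prepends a directory; `fetch_to_files` uses it to pick each download's destination. A minimal standalone sketch of that behavior (same name as the module's helper, reproduced here for illustration):

import os.path


def url_to_filename(url, directory=None):
    # Take the last path component; a trailing slash is stripped first.
    filename = url.rstrip("/").split("/")[-1]
    if directory is not None:
        # Optionally prepend the target directory, as fetch_to_files() does.
        filename = os.path.join(directory, filename)
    return filename


print(url_to_filename("http://example.com/pool/f/foo.deb/"))  # foo.deb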