Ignore build outputs

.gitignore (vendored): +3 lines

@@ -3,3 +3,6 @@
 
 # Local install output
 /install/
+
+# Build outputs
+/dist/
dist/plugin.video.viewit-0.1.46.zip (vendored, binary): deleted; binary file not shown.
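Adding `/dist/` to `.gitignore` does not untrack files that are already committed, which is why this commit also deletes the `dist/` artifacts. A minimal sketch of that two-step change, in a throwaway repository (the file names below are stand-ins taken from this diff):

```shell
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you

# Simulate a repo with a committed build output.
mkdir -p dist
echo artifact > dist/plugin.video.viewit-0.1.46.zip
git add -A
git commit -qm "initial"

# Step 1: add the ignore rules from this commit.
printf '# Build outputs\n/dist/\n' >> .gitignore

# Step 2: remove the tracked build outputs from the index only
# (--cached leaves the files on disk).
git rm -r -q --cached dist
git add .gitignore
git commit -qm "Ignore build outputs"
```

After this, `dist/` stays in the working tree but is no longer tracked, and the new `/dist/` rule keeps it out of future commits.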
dist/plugin.video.viewit/LICENSE.txt (vendored): -598 lines

@@ -1,598 +0,0 @@
-GNU GENERAL PUBLIC LICENSE
-Version 3, 29 June 2007
-
-Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
-Everyone is permitted to copy and distribute verbatim copies
-of this license document, but changing it is not allowed.
-
-Preamble
-
-The GNU General Public License is a free, copyleft license for
-software and other kinds of works.
-
-The licenses for most software and other practical works are designed
-to take away your freedom to share and change the works. By contrast,
-the GNU General Public License is intended to guarantee your freedom to
-share and change all versions of a program--to make sure it remains free
-software for all its users. We, the Free Software Foundation, use the
-GNU General Public License for most of our software; it applies also to
-any other work released this way by its authors. You can apply it to
-your programs, too.
-
-When we speak of free software, we are referring to freedom, not
-price. Our General Public Licenses are designed to make sure that you
-have the freedom to distribute copies of free software (and charge for
-them if you wish), that you receive source code or can get it if you
-want it, that you can change the software or use pieces of it in new
-free programs, and that you know you can do these things.
-
-To protect your rights, we need to prevent others from denying you
-these rights or asking you to surrender the rights. Therefore, you have
-certain responsibilities if you distribute copies of the software, or if
-you modify it: responsibilities to respect the freedom of others.
-
-For example, if you distribute copies of such a program, whether
-gratis or for a fee, you must pass on to the recipients the same
-freedoms that you received. You must make sure that they, too, receive
-or can get the source code. And you must show them these terms so they
-know their rights.
-
-Developers that use the GNU GPL protect your rights with two steps:
-(1) assert copyright on the software, and (2) offer you this License
-giving you legal permission to copy, distribute and/or modify it.
-
-For the developers' and authors' protection, the GPL clearly explains
-that there is no warranty for this free software. For both users' and
-authors' sake, the GPL requires that modified versions be marked as
-changed, so that their problems will not be attributed erroneously to
-authors of previous versions.
-
-Some devices are designed to deny users access to install or run
-modified versions of the software inside them, although the manufacturer
-can do so. This is fundamentally incompatible with the aim of
-protecting users' freedom to change the software. The systematic
-pattern of such abuse occurs in the area of products for individuals to
-use, which is precisely where it is most unacceptable. Therefore, we
-have designed this version of the GPL to prohibit the practice for those
-products. If such problems arise substantially in other domains, we
-stand ready to extend this provision to those domains in future versions
-of the GPL, as needed to protect the freedom of users.
-
-Finally, every program is threatened constantly by software patents.
-States should not allow patents to restrict development and use of
-software on general-purpose computers, but in those that do, we wish to
-avoid the special danger that patents applied to a free program could
-make it effectively proprietary. To prevent this, the GPL assures that
-patents cannot be used to render the program non-free.
-
-The precise terms and conditions for copying, distribution and
-modification follow.
-
-TERMS AND CONDITIONS
-
-0. Definitions.
-
-"This License" refers to version 3 of the GNU General Public License.
-
-"Copyright" also means copyright-like laws that apply to other kinds of
-works, such as semiconductor masks.
-
-"The Program" refers to any copyrightable work licensed under this
-License. Each licensee is addressed as "you". "Licensees" and
-"recipients" may be individuals or organizations.
-
-To "modify" a work means to copy from or adapt all or part of the work
-in a fashion requiring copyright permission, other than the making of an
-exact copy. The resulting work is called a "modified version" of the
-earlier work or a work "based on" the earlier work.
-
-A "covered work" means either the unmodified Program or a work based
-on the Program.
-
-To "propagate" a work means to do anything with it that, without
-permission, would make you directly or secondarily liable for
-infringement under applicable copyright law, except executing it on a
-computer or modifying a private copy. Propagation includes copying,
-distribution (with or without modification), making available to the
-public, and in some countries other activities as well.
-
-To "convey" a work means any kind of propagation that enables other
-parties to make or receive copies. Mere interaction with a user through
-a computer network, with no transfer of a copy, is not conveying.
-
-An interactive user interface displays "Appropriate Legal Notices"
-to the extent that it includes a convenient and prominently visible
-feature that (1) displays an appropriate copyright notice, and (2)
-tells the user that there is no warranty for the work (except to the
-extent that warranties are provided), that licensees may convey the
-work under this License, and how to view a copy of this License. If
-the interface presents a list of user commands or options, such as a
-menu, a prominent item in the list meets this criterion.
-
-1. Source Code.
-
-The "source code" for a work means the preferred form of the work
-for making modifications to it. "Object code" means any non-source
-form of a work.
-
-A "Standard Interface" means an interface that either is an official
-standard defined by a recognized standards body, or, in the case of
-interfaces specified for a particular programming language, one that
-is widely used among developers working in that language.
-
-The "System Libraries" of an executable work include anything, other
-than the work as a whole, that (a) is included in the normal form of
-packaging a Major Component, but which is not part of that Major
-Component, and (b) serves only to enable use of the work with that
-Major Component, or to implement a Standard Interface for which an
-implementation is available to the public in source code form. A
-"Major Component", in this context, means a major essential component
-(kernel, window system, and so on) of the specific operating system
-(if any) on which the executable work runs, or a compiler used to
-produce the work, or an object code interpreter used to run it.
-
-The "Corresponding Source" for a work in object code form means all
-the source code needed to generate, install, and (for an executable
-work) run the object code and to modify the work, including scripts to
-control those activities. However, it does not include the work's
-System Libraries, or general-purpose tools or generally available free
-programs which are used unmodified in performing those activities but
-which are not part of the work. For example, Corresponding Source
-includes interface definition files associated with source files for the
-work, and the source code for shared libraries and dynamically linked
-subprograms that the work is specifically designed to require, such as
-by intimate data communication or control flow between those subprograms
-and other parts of the work.
-
-The Corresponding Source need not include anything that users
-can regenerate automatically from other parts of the Corresponding
-Source.
-
-The Corresponding Source for a work in source code form is that
-same work.
-
-2. Basic Permissions.
-
-All rights granted under this License are granted for the term of
-copyright on the Program, and are irrevocable provided the stated
-conditions are met. This License explicitly affirms your unlimited
-permission to run the unmodified Program. The output from running a
-covered work is covered by this License only if the output, given its
-content, constitutes a covered work. This License acknowledges your
-rights of fair use or other equivalent, as provided by copyright law.
-
-You may make, run and propagate covered works that you do not
-convey, without conditions so long as your license otherwise remains
-in force. You may convey covered works to others for the sole purpose
-of having them make modifications exclusively for you, or provide you
-with facilities for running those works, provided that you comply with
-the terms of this License in conveying all material for which you do
-not control copyright. Those thus making or running the covered works
-for you must do so exclusively on your behalf, under your direction
-and control, on terms that prohibit them from making any copies of
-your copyrighted material outside their relationship with you.
-
-Conveying under any other circumstances is permitted solely under
-the conditions stated below. Sublicensing is not allowed; section 10
-makes it unnecessary.
-
-3. Protecting Users' Legal Rights From Anti-Circumvention Law.
-
-No covered work shall be deemed part of an effective technological
-measure under any applicable law fulfilling obligations under article
-11 of the WIPO copyright treaty adopted on 20 December 1996, or
-similar laws prohibiting or restricting circumvention of such
-measures.
-
-When you convey a covered work, you waive any legal power to forbid
-circumvention of technological measures to the extent such circumvention
-is effected by exercising rights under this License with respect to
-the covered work, and you disclaim any intention to limit operation or
-modification of the work as a means of enforcing, against the work's
-users, your or third parties' legal rights to forbid circumvention of
-technological measures.
-
-4. Conveying Verbatim Copies.
-
-You may convey verbatim copies of the Program's source code as you
-receive it, in any medium, provided that you conspicuously and
-appropriately publish on each copy an appropriate copyright notice;
-keep intact all notices stating that this License and any
-non-permissive terms added in accord with section 7 apply to the code;
-keep intact all notices of the absence of any warranty; and give all
-recipients a copy of this License along with the Program.
-
-You may charge any price or no price for each copy that you convey,
-and you may offer support or warranty protection for a fee.
-
-5. Conveying Modified Source Versions.
-
-You may convey a work based on the Program, or the modifications to
-produce it from the Program, in the form of source code under the
-terms of section 4, provided that you also meet all of these conditions:
-
-a) The work must carry prominent notices stating that you modified it,
-and giving a relevant date.
-
-b) The work must carry prominent notices stating that it is released
-under this License and any conditions added under section 7. This
-requirement modifies the requirement in section 4 to "keep intact all
-notices".
-
-c) You must license the entire work, as a whole, under this License
-to anyone who comes into possession of a copy. This License will
-therefore apply, along with any applicable section 7 additional terms,
-to the whole of the work, and all its parts, regardless of how they are
-packaged. This License gives no permission to license the work in any
-other way, but it does not invalidate such permission if you have
-separately received it.
-
-d) If the work has interactive user interfaces, each must display
-Appropriate Legal Notices; however, if the Program has interactive
-interfaces that do not display Appropriate Legal Notices, your work
-need not make them do so.
-
-A compilation of a covered work with other separate and independent
-works, which are not by their nature extensions of the covered work,
-and which are not combined with it such as to form a larger program,
-in or on a volume of a storage or distribution medium, is called an
-"aggregate" if the compilation and its resulting copyright are not
-used to limit the access or legal rights of the compilation's users
-beyond what the individual works permit. Inclusion of a covered work
-in an aggregate does not cause this License to apply to the other
-parts of the aggregate.
-
-6. Conveying Non-Source Forms.
-
-You may convey a covered work in object code form under the terms
-of sections 4 and 5, provided that you also convey the
-machine-readable Corresponding Source under the terms of this License,
-in one of these ways:
-
-a) Convey the object code in, or embodied in, a physical product
-(including a physical distribution medium), accompanied by the
-Corresponding Source fixed on a durable physical medium customarily
-used for software interchange.
-
-b) Convey the object code in, or embodied in, a physical product
-(including a physical distribution medium), accompanied by a written
-offer, valid for at least three years and valid for as long as you
-offer spare parts or customer support for that product model, to give
-anyone who possesses the object code either (1) a copy of the
-Corresponding Source for all the software in the product that is
-covered by this License, on a durable physical medium customarily used
-for software interchange, for a price no more than your reasonable cost
-of physically performing this conveying of source, or (2) access to
-copy the Corresponding Source from a network server at no charge.
-
-c) Convey individual copies of the object code with a copy of the
-written offer to provide the Corresponding Source. This alternative
-is allowed only occasionally and noncommercially, and only if you
-received the object code with such an offer, in accord with subsection
-6b.
-
-d) Convey the object code by offering access from a designated place
-(gratis or for a charge), and offer equivalent access to the
-Corresponding Source in the same way through the same place at no
-further charge. You need not require recipients to copy the
-Corresponding Source along with the object code. If the place to copy
-the object code is a network server, the Corresponding Source may be on
-a different server (operated by you or a third party) that supports
-equivalent copying facilities, provided you maintain clear directions
-next to the object code saying where to find the Corresponding Source.
-Regardless of what server hosts the Corresponding Source, you remain
-obligated to ensure that it is available for as long as needed to
-satisfy these requirements.
-
-e) Convey the object code using peer-to-peer transmission, provided
-you inform other peers where the object code and Corresponding Source
-of the work are being offered to the general public at no charge under
-subsection 6d.
-
-7. Additional Terms.
-
-"Additional permissions" are terms that supplement the terms of this
-License by making exceptions from one or more of its conditions.
-Additional permissions that are applicable to the entire Program shall
-be treated as though they were included in this License, to the extent
-that they are valid under applicable law. If additional permissions
-apply only to part of the Program, that part may be used separately
-under those permissions, but the entire Program remains governed by
-this License without regard to the additional permissions.
-
-When you convey a copy of a covered work, you may at your option
-remove any additional permissions from that copy, or from any part of
-it. (Additional permissions may be written to require their own
-removal in certain cases when you modify the work.) You may place
-additional permissions on material, added by you to a covered work,
-for which you have or can give appropriate copyright permission.
-
-Notwithstanding any other provision of this License, for material you
-add to a covered work, you may (if authorized by the copyright holders of
-that material) supplement the terms of this License with terms:
-
-a) Disclaiming warranty or limiting liability differently from the
-terms of sections 15 and 16 of this License; or
-
-b) Requiring preservation of specified reasonable legal notices or
-author attributions in that material or in the Appropriate Legal
-Notices displayed by works containing it; or
-
-c) Prohibiting misrepresentation of the origin of that material, or
-requiring that modified versions of such material be marked in
-reasonable ways as different from the original version; or
-
-d) Limiting the use for publicity purposes of names of licensors or
-authors of the material; or
-
-e) Declining to grant rights under trademark law for use of some
-trade names, trademarks, or service marks; or
-
-f) Requiring indemnification of licensors and authors of that
-material by anyone who conveys the material (or modified versions of
-it) with contractual assumptions of liability to the recipient, for
-any liability that these contractual assumptions directly impose on
-those licensors and authors.
-
-All other non-permissive additional terms are considered "further
-restrictions" within the meaning of section 10. If the Program as you
-received it, or any part of it, contains a notice stating that it is
-governed by this License along with a term that is a further
-restriction, you may remove that term. If a license document contains
-a further restriction but permits relicensing or conveying under this
-License, you may add to a covered work material governed by the terms
-of that license document, provided that the further restriction does
-not survive such relicensing or conveying.
-
-If you add terms to a covered work in accord with this section, you
-must place, in the relevant source files, a statement of the
-additional terms that apply to those files, or a notice indicating
-where to find the applicable terms.
-
-Additional terms, permissive or non-permissive, may be stated in the
-form of a separately written license, or stated as exceptions;
-the above requirements apply either way.
-
-8. Termination.
-
-You may not propagate or modify a covered work except as expressly
-provided under this License. Any attempt otherwise to propagate or
-modify it is void, and will automatically terminate your rights under
-this License (including any patent licenses granted under the third
-paragraph of section 11).
-
-However, if you cease all violation of this License, then your
-license from a particular copyright holder is reinstated (a)
-provisionally, unless and until the copyright holder explicitly and
-finally terminates your license, and (b) permanently, if the copyright
-holder fails to notify you of the violation by some reasonable means
-prior to 60 days after the cessation.
-
-Moreover, your license from a particular copyright holder is
-reinstated permanently if the copyright holder notifies you of the
-violation by some reasonable means, this is the first time you have
-received notice of violation of this License (for any work) from that
-copyright holder, and you cure the violation prior to 30 days after
-your receipt of the notice.
-
-Termination of your rights under this section does not terminate the
-licenses of parties who have received copies or rights from you under
-this License. If your rights have been terminated and not permanently
-reinstated, you do not qualify to receive new licenses for the same
-material under section 10.
-
-9. Acceptance Not Required for Having Copies.
-
-You are not required to accept this License in order to receive or
-run a copy of the Program. Ancillary propagation of a covered work
-occurring solely as a consequence of using peer-to-peer transmission
-to receive a copy likewise does not require acceptance. However,
-nothing other than this License grants you permission to propagate or
-modify any covered work. These actions infringe copyright if you do
-not accept this License. Therefore, by modifying or propagating a
-covered work, you indicate your acceptance of this License to do so.
-
-10. Automatic Licensing of Downstream Recipients.
-
-Each time you convey a covered work, the recipient automatically
-receives a license from the original licensors, to run, modify and
-propagate that work, subject to this License. You are not responsible
-for enforcing compliance by third parties with this License.
-
-An "entity transaction" is a transaction transferring control of an
-organization, or substantially all assets of one, or subdividing an
-organization, or merging organizations. If propagation of a covered
-work results from an entity transaction, each party to that transaction
-who receives a copy of the work also receives whatever licenses to the
-work the party's predecessor in interest had or could give under the
-previous paragraph, plus a right to possession of the Corresponding
-Source of the work from the predecessor in interest, if the
-predecessor has it or can get it with reasonable efforts.
-
-You may not impose any further restrictions on the exercise of the
-rights granted or affirmed under this License. For example, you may
-not impose a license fee, royalty, or other charge for exercise of
-rights granted under this License, and you may not initiate litigation
-(including a cross-claim or counterclaim in a lawsuit) alleging that
-any patent claim is infringed by making, using, selling, offering for
-sale, or importing the Program or any portion of it.
-
-11. Patents.
-
-A "contributor" is a copyright holder who authorizes use under this
-License of the Program or a work on which the Program is based. The
-work thus licensed is called the contributor's "contributor version".
-
-A contributor's "essential patent claims" are all patent claims owned
-or controlled by the contributor, whether already acquired or hereafter
-acquired, that would be infringed by some manner, permitted by this
-License, of making, using, or selling its contributor version, but do
-not include claims that would be infringed only as a consequence of
-further modification of the contributor version. For purposes of this
-definition, "control" includes the right to grant patent sublicenses in
-a manner consistent with the requirements of this License.
-
-Each contributor grants you a non-exclusive, worldwide, royalty-free
-patent license under the contributor's essential patent claims, to
-make, use, sell, offer for sale, import and otherwise run, modify and
-propagate the contents of its contributor version.
-
-In the following three paragraphs, a "patent license" is any express
-agreement or commitment, however denominated, not to enforce a patent
-(such as an express permission to practice a patent or covenant not to
-sue for patent infringement). To "grant" such a patent license to a
-party means to make such an agreement or commitment not to enforce a
-patent against the party.
-
-If you convey a covered work, knowingly relying on a patent license,
-and the Corresponding Source of the work is not available for anyone
-to copy, free of charge and under the terms of this License, through a
-publicly available network server or other readily accessible means,
-then you must either (1) cause the Corresponding Source to be so
-available, or (2) arrange to deprive yourself of the benefit of the
-patent license for this particular work, or (3) arrange, in a manner
-consistent with the requirements of this License, to extend the patent
-license to downstream recipients. "Knowingly relying" means you have
-actual knowledge that, but for the patent license, your conveying the
-covered work in a country, or your recipient's use of the covered work
-in a country, would infringe one or more identifiable patents in that
-country that you have reason to believe are valid.
-
-If, pursuant to or in connection with a single transaction or
-arrangement, you convey, or propagate by procuring conveyance of, a
-covered work, and grant a patent license to some of the parties
-receiving the covered work authorizing them to use, propagate, modify
-or convey a specific copy of the covered work, then the patent license
-you grant is automatically extended to all recipients of the covered
-work and works based on it.
-
-A patent license is "discriminatory" if it does not include within
-the scope of its coverage, prohibits the exercise of, or is conditioned
-on the non-exercise of one or more of the rights that are specifically
-granted under this License. You may not convey a covered work if you
-are a party to an arrangement with a third party that is in the business
-of distributing software, under which you make payment to the third
-party based on the extent of your activity of conveying the work, and
-under which the third party grants, to any of the parties who would
-receive the covered work from you, a discriminatory patent license (a)
-in connection with copies of the covered work conveyed by you (or
-copies made from those copies), or (b) primarily for and in connection
-with specific products or compilations that contain the covered work,
-unless you entered into that arrangement, or that patent license was
-granted, prior to 28 March 2007.
-
-Nothing in this License shall be construed as excluding or limiting
-any implied license or other defenses to infringement that may
-otherwise be available to you under applicable patent law.
-
-12. No Surrender of Others' Freedom.
-
-If conditions are imposed on you (whether by court order, agreement or
-otherwise) that contradict the conditions of this License, they do not
-excuse you from the conditions of this License. If you cannot convey a
-covered work so as to satisfy simultaneously your obligations under this
-License and any other pertinent obligations, then as a consequence you
-may not convey it at all. For example, if you agree to terms that
-obligate you to collect a royalty for further conveying from those to
-whom you convey the Program, the only way you could satisfy both those
-terms and this License would be to refrain entirely from conveying the
-Program.
-
-13. Use with the GNU Affero General Public License.
-
-Notwithstanding any other provision of this License, you have
-permission to link or combine any covered work with a work licensed
-under version 3 of the GNU Affero General Public License into a single
-combined work, and to convey the resulting work. The terms of this
-License will continue to apply to the part which is the covered work,
-but the special requirements of the GNU Affero General Public License,
-section 13, concerning interaction through a network will apply to the
-combination as such.
-
-14. Revised Versions of this License.
-
-The Free Software Foundation may publish revised and/or new versions of
-the GNU General Public License from time to time. Such new versions will
-be similar in spirit to the present version, but may differ in detail to
-address new problems or concerns.
-
-Each version is given a distinguishing version number. If the
-Program specifies that a certain numbered version of the GNU General
-Public License "or any later version" applies to it, you have the
-option of following the terms and conditions either of that numbered
-version or of any later version published by the Free Software
-Foundation. If the Program does not specify a version number of the
-GNU General Public License, you may choose any version ever published
-by the Free Software Foundation.
-
-If the Program specifies that a proxy can decide which future
-versions of the GNU General Public License can be used, that proxy's
-public statement of acceptance of a version permanently authorizes you
-to choose that version for the Program.
-
-Later license versions may give you additional or different
-permissions. However, no additional obligations are imposed on any
-author or copyright holder as a result of your choosing to follow a
-later version.
-
-15. Disclaimer of Warranty.
-
-THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
-APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
-HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
-OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
-THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
-PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
-IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
-ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
-
-16. Limitation of Liability.
-
-IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
-WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
-THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
-GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
-USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
-DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
-PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
-EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
-SUCH DAMAGES.
-
-17. Interpretation of Sections 15 and 16.
-
-If the disclaimer of warranty and limitation of liability provided
-above cannot be given local legal effect according to their terms,
-reviewing courts shall apply local law that most closely approximates
-an absolute waiver of all civil liability in connection with the
-Program, unless a warranty or assumption of liability accompanies a
-copy of the Program in return for a fee.
-
-END OF TERMS AND CONDITIONS
-
-How to Apply These Terms to Your New Programs
-
-If you develop a new program, and you want it to be of the greatest
-possible use to the public, the best way to achieve this is to make it
-free software which everyone can redistribute and change under these terms.
-
-To do so, attach the following notices to the program. It is safest
-to attach them to the start of each source file to most effectively
-state the exclusion of warranty; and each file should have at least
-the "copyright" line and a pointer to where the full notice is found.
-
-<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
|
||||
|
||||
This program is free software: you can redistribute it and/or modify
|
||||
it under the terms of the GNU General Public License as published by
|
||||
the Free Software Foundation, either version 3 of the License, or
|
||||
(at your option) any later version.
|
||||
|
||||
This program is distributed in the hope that it will be useful,
|
||||
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
GNU General Public License for more details.
|
||||
|
||||
You should have received a copy of the GNU General Public License
|
||||
along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||
|
||||
13
dist/plugin.video.viewit/NOTICE.txt
vendored
@@ -1,13 +0,0 @@
Copyright (C) 2026 ViewIt contributors

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

This Kodi addon depends on `script.module.resolveurl`.
11
dist/plugin.video.viewit/README_DEPENDENCIES.txt
vendored
@@ -1,11 +0,0 @@
Dependencies for the Serienstream plugin:
- Python package: requests
- Python package: beautifulsoup4
- Kodi addon: script.module.resolveurl

Note:
Kodi uses its own Python. Install packages into the Kodi Python environment
or use a Kodi addon that bundles Python packages.

License:
This Kodi addon is GPL-3.0-or-later (see `LICENSE.txt`).
21
dist/plugin.video.viewit/addon.xml
vendored
@@ -1,21 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<addon id="plugin.video.viewit" name="ViewIt" version="0.1.46" provider-name="ViewIt">
    <requires>
        <import addon="xbmc.python" version="3.0.0" />
        <import addon="script.module.requests" />
        <import addon="script.module.beautifulsoup4" />
        <import addon="script.module.resolveurl" />
    </requires>
    <extension point="xbmc.python.pluginsource" library="default.py">
        <provides>video</provides>
    </extension>
    <extension point="xbmc.addon.metadata">
        <summary>ViewIt Kodi Plugin</summary>
        <description>Streaming addon for streaming sites: search, seasons/episodes, and playback.</description>
        <assets>
            <icon>icon.png</icon>
        </assets>
        <license>GPL-3.0-or-later</license>
        <platform>all</platform>
    </extension>
</addon>
2417
dist/plugin.video.viewit/default.py
vendored
File diff suppressed because it is too large
34
dist/plugin.video.viewit/http_session_pool.py
vendored
@@ -1,34 +0,0 @@
#!/usr/bin/env python3
"""Shared requests.Session pooling for plugins.

Goal: reuse TCP connections/cookies across multiple HTTP calls within a Kodi session.
"""

from __future__ import annotations

from typing import Any, Dict, Optional

try:  # pragma: no cover - optional dependency
    import requests
except Exception:  # pragma: no cover
    requests = None

_SESSIONS: Dict[str, Any] = {}


def get_requests_session(key: str, *, headers: Optional[dict[str, str]] = None):
    """Return a cached `requests.Session()` for the given key."""
    if requests is None:
        raise RuntimeError("requests is not available.")
    key = (key or "").strip() or "default"
    session = _SESSIONS.get(key)
    if session is None:
        session = requests.Session()
        _SESSIONS[key] = session
    if headers:
        try:
            session.headers.update({str(k): str(v) for k, v in headers.items() if k and v})
        except Exception:
            pass
    return session

BIN
dist/plugin.video.viewit/icon.png
vendored
Binary file not shown.
Before Width: | Height: | Size: 97 KiB
128
dist/plugin.video.viewit/plugin_helpers.py
vendored
@@ -1,128 +0,0 @@
#!/usr/bin/env python3
"""Shared helpers for ViewIt plugins.

Focus:
- Kodi addon settings access (string/bool)
- Optional URL notifications
- Optional URL logging
- Optional HTML response dumps

Designed to work both in Kodi and outside Kodi (for linting/tests).
"""

from __future__ import annotations

from datetime import datetime
import hashlib
import os
from typing import Optional

try:  # pragma: no cover - Kodi runtime
    import xbmcaddon  # type: ignore[import-not-found]
    import xbmcvfs  # type: ignore[import-not-found]
    import xbmcgui  # type: ignore[import-not-found]
except ImportError:  # pragma: no cover - allow importing outside Kodi
    xbmcaddon = None
    xbmcvfs = None
    xbmcgui = None


def get_setting_string(addon_id: str, setting_id: str, *, default: str = "") -> str:
    if xbmcaddon is None:
        return default
    try:
        addon = xbmcaddon.Addon(addon_id)
        getter = getattr(addon, "getSettingString", None)
        if getter is not None:
            return str(getter(setting_id) or "").strip()
        return str(addon.getSetting(setting_id) or "").strip()
    except Exception:
        return default


def get_setting_bool(addon_id: str, setting_id: str, *, default: bool = False) -> bool:
    if xbmcaddon is None:
        return default
    try:
        addon = xbmcaddon.Addon(addon_id)
        getter = getattr(addon, "getSettingBool", None)
        if getter is not None:
            return bool(getter(setting_id))
        raw = addon.getSetting(setting_id)
        return str(raw).strip().lower() in {"1", "true", "yes", "on"}
    except Exception:
        return default


def notify_url(addon_id: str, *, heading: str, url: str, enabled_setting_id: str) -> None:
    if xbmcgui is None:
        return
    if not get_setting_bool(addon_id, enabled_setting_id, default=False):
        return
    try:
        xbmcgui.Dialog().notification(heading, url, xbmcgui.NOTIFICATION_INFO, 3000)
    except Exception:
        return


def _profile_logs_dir(addon_id: str) -> Optional[str]:
    if xbmcaddon is None or xbmcvfs is None:
        return None
    try:
        addon = xbmcaddon.Addon(addon_id)
        profile = xbmcvfs.translatePath(addon.getAddonInfo("profile"))
        log_dir = os.path.join(profile, "logs")
        if not xbmcvfs.exists(log_dir):
            xbmcvfs.mkdirs(log_dir)
        return log_dir
    except Exception:
        return None


def _append_text_file(path: str, content: str) -> None:
    try:
        with open(path, "a", encoding="utf-8") as handle:
            handle.write(content)
        return
    except Exception:
        pass
    if xbmcvfs is None:
        return
    try:
        handle = xbmcvfs.File(path, "a")
        handle.write(content)
        handle.close()
    except Exception:
        return


def log_url(addon_id: str, *, enabled_setting_id: str, log_filename: str, url: str, kind: str = "VISIT") -> None:
    if not get_setting_bool(addon_id, enabled_setting_id, default=False):
        return
    timestamp = datetime.utcnow().isoformat(timespec="seconds") + "Z"
    line = f"{timestamp}\t{kind}\t{url}\n"
    log_dir = _profile_logs_dir(addon_id)
    if log_dir:
        _append_text_file(os.path.join(log_dir, log_filename), line)
        return
    _append_text_file(os.path.join(os.path.dirname(__file__), log_filename), line)


def dump_response_html(
    addon_id: str,
    *,
    enabled_setting_id: str,
    url: str,
    body: str,
    filename_prefix: str,
) -> None:
    if not get_setting_bool(addon_id, enabled_setting_id, default=False):
        return
    timestamp = datetime.utcnow().strftime("%Y%m%d_%H%M%S_%f")
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()  # nosec - filename only
    filename = f"{filename_prefix}_{timestamp}_{digest}.html"
    log_dir = _profile_logs_dir(addon_id)
    path = os.path.join(log_dir, filename) if log_dir else os.path.join(os.path.dirname(__file__), filename)
    content = f"<!-- {url} -->\n{body or ''}"
    _append_text_file(path, content)

55
dist/plugin.video.viewit/plugin_interface.py
vendored
@@ -1,55 +0,0 @@
#!/usr/bin/env python3
"""Shared interface for Kodi plugins."""

from __future__ import annotations

from abc import ABC, abstractmethod
from typing import List, Optional, Set


class BasisPlugin(ABC):
    """Abstract base class for all integrations."""

    name: str

    @abstractmethod
    async def search_titles(self, query: str) -> List[str]:
        """Return a list of all hits for the search query."""

    @abstractmethod
    def seasons_for(self, title: str) -> List[str]:
        """Return all seasons for a title."""

    @abstractmethod
    def episodes_for(self, title: str, season: str) -> List[str]:
        """Return all episodes for a season."""

    def stream_link_for(self, title: str, season: str, episode: str) -> Optional[str]:
        """Optional: return the stream link for a specific episode."""
        return None

    def resolve_stream_link(self, link: str) -> Optional[str]:
        """Optional: follow a stream link and return the final URL."""
        return None

    def genres(self) -> List[str]:
        """Optional: return a list of genres (if available)."""
        return []

    def titles_for_genre(self, genre: str) -> List[str]:
        """Optional: return all series titles for a genre."""
        return []

    def capabilities(self) -> Set[str]:
        """Optional: return the set of features/capabilities of this plugin.

        Examples:
        - `popular_series`: the plugin can return a list of popular series.
        """
        return set()

    def popular_series(self) -> List[str]:
        """Optional: return a list of popular series (as title strings)."""
        return []
1
dist/plugin.video.viewit/plugins/__init__.py
vendored
@@ -1 +0,0 @@
"""Kodi addon plugins."""
127
dist/plugin.video.viewit/plugins/_template_plugin.py
vendored
@@ -1,127 +0,0 @@
"""Template for a new ViewIt plugin (based on: serienstream_plugin).

This file is NOT loaded automatically (the filename starts with `_`).
To use it:
1) Copy/rename the file (without the leading underscore), e.g. `my_site_plugin.py`
2) Adjust `name`, `BASE_URL`, and the implementations.
"""

from __future__ import annotations

from dataclasses import dataclass
from typing import TYPE_CHECKING, Any, List, Optional, TypeAlias

try:  # pragma: no cover - optional dependency
    import requests
    from bs4 import BeautifulSoup  # type: ignore[import-not-found]
except ImportError as exc:  # pragma: no cover - optional dependency
    requests = None
    BeautifulSoup = None
    REQUESTS_AVAILABLE = False
    REQUESTS_IMPORT_ERROR = exc
else:
    REQUESTS_AVAILABLE = True
    REQUESTS_IMPORT_ERROR = None

try:  # pragma: no cover - optional Kodi helpers
    import xbmcaddon  # type: ignore[import-not-found]
except ImportError:  # pragma: no cover - allow running outside Kodi
    xbmcaddon = None

from plugin_interface import BasisPlugin

if TYPE_CHECKING:  # pragma: no cover
    from requests import Session as RequestsSession
    from bs4 import BeautifulSoup as BeautifulSoupT  # type: ignore[import-not-found]
else:  # pragma: no cover
    RequestsSession: TypeAlias = Any
    BeautifulSoupT: TypeAlias = Any


ADDON_ID = "plugin.video.viewit"
BASE_URL = "https://example.com"
DEFAULT_TIMEOUT = 20
HEADERS = {
    "User-Agent": "Mozilla/5.0 (Kodi; ViewIt) AppleWebKit/537.36 (KHTML, like Gecko)",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "de-DE,de;q=0.9,en;q=0.8",
    "Connection": "keep-alive",
}


@dataclass(frozen=True)
class TitleHit:
    """A search hit with title and detail URL."""

    title: str
    url: str


class TemplatePlugin(BasisPlugin):
    """Template for a streaming-site integration.

    Optionally, a plugin can declare capabilities (e.g. `popular_series`)
    so the router can offer matching menu entries.
    """

    name = "Template"

    def __init__(self) -> None:
        self._session: RequestsSession | None = None

    @property
    def is_available(self) -> bool:
        return REQUESTS_AVAILABLE

    @property
    def unavailable_reason(self) -> str:
        if REQUESTS_AVAILABLE:
            return ""
        return f"requests/bs4 not available: {REQUESTS_IMPORT_ERROR}"

    def _get_session(self) -> RequestsSession:
        if requests is None:
            raise RuntimeError(self.unavailable_reason)
        if self._session is None:
            session = requests.Session()
            session.headers.update(HEADERS)
            self._session = session
        return self._session

    async def search_titles(self, query: str) -> List[str]:
        """TODO: implement the search on the target site."""
        _ = query
        return []

    def seasons_for(self, title: str) -> List[str]:
        """TODO: return seasons for a title."""
        _ = title
        return []

    def episodes_for(self, title: str, season: str) -> List[str]:
        """TODO: return episodes for title + season."""
        _ = (title, season)
        return []

    def capabilities(self) -> set[str]:
        """Optional: declare capabilities of this plugin.

        Examples:
        - `popular_series`: plugin can return popular series
        - `genres`: plugin supports the genre browser
        """
        return set()

    def popular_series(self) -> List[str]:
        """Optional: list of popular series (only if `popular_series` is declared)."""
        return []

    def stream_link_for(self, title: str, season: str, episode: str) -> Optional[str]:
        """Optional: embed/hoster link for an episode."""
        _ = (title, season, episode)
        return None

    def resolve_stream_link(self, link: str) -> Optional[str]:
        """Optional: redirect/mirror resolution."""
        return link
877
dist/plugin.video.viewit/plugins/aniworld_plugin.py
vendored
@@ -1,877 +0,0 @@
"""AniWorld (aniworld.to) integration as a downloader plugin.

This plugin is largely compatible with the Serienstream integration:
- same season/episode URL structure (/staffel-x/episode-y)
- same hoster/watch layouts (best-effort)
"""

from __future__ import annotations

from dataclasses import dataclass
import re
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, TypeAlias

try:  # pragma: no cover - optional dependency
    import requests
    from bs4 import BeautifulSoup  # type: ignore[import-not-found]
except ImportError as exc:  # pragma: no cover - optional dependency
    requests = None
    BeautifulSoup = None
    REQUESTS_AVAILABLE = False
    REQUESTS_IMPORT_ERROR = exc
else:
    REQUESTS_AVAILABLE = True
    REQUESTS_IMPORT_ERROR = None

try:  # pragma: no cover - optional Kodi helpers
    import xbmcaddon  # type: ignore[import-not-found]
except ImportError:  # pragma: no cover - allow running outside Kodi
    xbmcaddon = None

from plugin_interface import BasisPlugin
from plugin_helpers import dump_response_html, get_setting_bool, log_url, notify_url
from http_session_pool import get_requests_session
from regex_patterns import DIGITS, SEASON_EPISODE_TAG, SEASON_EPISODE_URL, STAFFEL_NUM_IN_URL

if TYPE_CHECKING:  # pragma: no cover
    from requests import Session as RequestsSession
    from bs4 import BeautifulSoup as BeautifulSoupT  # type: ignore[import-not-found]
else:  # pragma: no cover
    RequestsSession: TypeAlias = Any
    BeautifulSoupT: TypeAlias = Any


BASE_URL = "https://aniworld.to"
ANIME_BASE_URL = f"{BASE_URL}/anime/stream"
POPULAR_ANIMES_URL = f"{BASE_URL}/beliebte-animes"
GENRES_URL = f"{BASE_URL}/animes"
LATEST_EPISODES_URL = f"{BASE_URL}/neue-episoden"
SEARCH_URL = f"{BASE_URL}/search?q={{query}}"
SEARCH_API_URL = f"{BASE_URL}/ajax/search"
DEFAULT_PREFERRED_HOSTERS = ["voe"]
DEFAULT_TIMEOUT = 20
ADDON_ID = "plugin.video.viewit"
GLOBAL_SETTING_LOG_URLS = "debug_log_urls"
GLOBAL_SETTING_DUMP_HTML = "debug_dump_html"
GLOBAL_SETTING_SHOW_URL_INFO = "debug_show_url_info"
HEADERS = {
    "User-Agent": "Mozilla/5.0 (Kodi; ViewIt) AppleWebKit/537.36 (KHTML, like Gecko)",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "de-DE,de;q=0.9,en;q=0.8",
    "Connection": "keep-alive",
}

@dataclass
class SeriesResult:
    title: str
    description: str
    url: str


@dataclass
class EpisodeInfo:
    number: int
    title: str
    original_title: str
    url: str


@dataclass
class LatestEpisode:
    series_title: str
    season: int
    episode: int
    url: str
    airdate: str


@dataclass
class SeasonInfo:
    number: int
    url: str
    episodes: List[EpisodeInfo]


def _absolute_url(href: str) -> str:
    return f"{BASE_URL}{href}" if href.startswith("/") else href


def _log_url(url: str, *, kind: str = "VISIT") -> None:
    log_url(ADDON_ID, enabled_setting_id=GLOBAL_SETTING_LOG_URLS, log_filename="aniworld_urls.log", url=url, kind=kind)


def _log_visit(url: str) -> None:
    _log_url(url, kind="VISIT")
    notify_url(ADDON_ID, heading="AniWorld", url=url, enabled_setting_id=GLOBAL_SETTING_SHOW_URL_INFO)


def _log_parsed_url(url: str) -> None:
    _log_url(url, kind="PARSE")


def _log_response_html(url: str, body: str) -> None:
    dump_response_html(
        ADDON_ID,
        enabled_setting_id=GLOBAL_SETTING_DUMP_HTML,
        url=url,
        body=body,
        filename_prefix="aniworld_response",
    )


def _normalize_search_text(value: str) -> str:
    value = (value or "").casefold()
    value = re.sub(r"[^a-z0-9]+", " ", value)
    value = re.sub(r"\s+", " ", value).strip()
    return value


def _strip_html(text: str) -> str:
    if not text:
        return ""
    return re.sub(r"<[^>]+>", "", text)


def _matches_query(query: str, *, title: str) -> bool:
    normalized_query = _normalize_search_text(query)
    if not normalized_query:
        return False
    haystack = _normalize_search_text(title)
    if not haystack:
        return False
    return normalized_query in haystack


def _ensure_requests() -> None:
    if requests is None or BeautifulSoup is None:
        raise RuntimeError("requests/bs4 are not available.")

def _looks_like_cloudflare_challenge(body: str) -> bool:
    lower = body.lower()
    markers = (
        "cf-browser-verification",
        "cf-challenge",
        "cf_chl",
        "challenge-platform",
        "attention required! | cloudflare",
        "just a moment...",
        "cloudflare ray id",
    )
    return any(marker in lower for marker in markers)


def _get_soup(url: str, *, session: Optional[RequestsSession] = None) -> BeautifulSoupT:
    _ensure_requests()
    _log_visit(url)
    sess = session or get_requests_session("aniworld", headers=HEADERS)
    response = sess.get(url, headers=HEADERS, timeout=DEFAULT_TIMEOUT)
    response.raise_for_status()
    if response.url and response.url != url:
        _log_url(response.url, kind="REDIRECT")
    _log_response_html(url, response.text)
    if _looks_like_cloudflare_challenge(response.text):
        raise RuntimeError("Cloudflare protection detected. requests may not be sufficient.")
    return BeautifulSoup(response.text, "html.parser")


def _get_soup_simple(url: str) -> BeautifulSoupT:
    _ensure_requests()
    _log_visit(url)
    sess = get_requests_session("aniworld", headers=HEADERS)
    response = sess.get(url, headers=HEADERS, timeout=DEFAULT_TIMEOUT)
    response.raise_for_status()
    if response.url and response.url != url:
        _log_url(response.url, kind="REDIRECT")
    _log_response_html(url, response.text)
    if _looks_like_cloudflare_challenge(response.text):
        raise RuntimeError("Cloudflare protection detected. requests may not be sufficient.")
    return BeautifulSoup(response.text, "html.parser")


def _post_json(url: str, *, payload: Dict[str, str], session: Optional[RequestsSession] = None) -> Any:
    _ensure_requests()
    _log_visit(url)
    sess = session or get_requests_session("aniworld", headers=HEADERS)
    response = sess.post(url, data=payload, headers=HEADERS, timeout=DEFAULT_TIMEOUT)
    response.raise_for_status()
    if response.url and response.url != url:
        _log_url(response.url, kind="REDIRECT")
    _log_response_html(url, response.text)
    if _looks_like_cloudflare_challenge(response.text):
        raise RuntimeError("Cloudflare protection detected. requests may not be sufficient.")
    try:
        return response.json()
    except Exception:
        return None

def _extract_canonical_url(soup: BeautifulSoupT, fallback: str) -> str:
    canonical = soup.select_one('link[rel="canonical"][href]')
    href = (canonical.get("href") if canonical else "") or ""
    href = href.strip()
    if href.startswith("http://") or href.startswith("https://"):
        return href.rstrip("/")
    return fallback.rstrip("/")


def _series_root_url(url: str) -> str:
    normalized = (url or "").strip().rstrip("/")
    normalized = re.sub(r"/staffel-\d+(?:/.*)?$", "", normalized)
    normalized = re.sub(r"/episode-\d+(?:/.*)?$", "", normalized)
    return normalized.rstrip("/")


def _extract_season_links(soup: BeautifulSoupT) -> List[Tuple[int, str]]:
    season_links: List[Tuple[int, str]] = []
    seen_numbers: set[int] = set()
    for anchor in soup.select('.hosterSiteDirectNav a[href*="/staffel-"]'):
        href = anchor.get("href") or ""
        if "/episode-" in href:
            continue
        match = re.search(STAFFEL_NUM_IN_URL, href)
        if match:
            number = int(match.group(1))
        else:
            label = anchor.get_text(strip=True)
            if not label.isdigit():
                continue
            number = int(label)
        if number in seen_numbers:
            continue
        seen_numbers.add(number)
        season_url = _absolute_url(href)
        if season_url:
            _log_parsed_url(season_url)
        season_links.append((number, season_url))
    season_links.sort(key=lambda item: item[0])
    return season_links


def _extract_number_of_seasons(soup: BeautifulSoupT) -> Optional[int]:
    tag = soup.select_one('meta[itemprop="numberOfSeasons"]')
    if not tag:
        return None
    content = (tag.get("content") or "").strip()
    if not content.isdigit():
        return None
    count = int(content)
    return count if count > 0 else None


def _extract_episodes(soup: BeautifulSoupT) -> List[EpisodeInfo]:
    episodes: List[EpisodeInfo] = []
    rows = soup.select("table.seasonEpisodesList tbody tr")
    for index, row in enumerate(rows):
        cells = row.find_all("td")
        if not cells:
            continue
        episode_cell = cells[0]
        number_text = episode_cell.get_text(strip=True)
        digits = "".join(ch for ch in number_text if ch.isdigit())
        number = int(digits) if digits else index + 1
        link = episode_cell.find("a")
        href = link.get("href") if link else ""
        url = _absolute_url(href or "")
        if url:
            _log_parsed_url(url)

        title_tag = row.select_one(".seasonEpisodeTitle strong")
        original_tag = row.select_one(".seasonEpisodeTitle span")
        title = title_tag.get_text(strip=True) if title_tag else ""
        original_title = original_tag.get_text(strip=True) if original_tag else ""

        if url:
            episodes.append(EpisodeInfo(number=number, title=title, original_title=original_title, url=url))
    return episodes


_LATEST_EPISODE_TAG_RE = re.compile(SEASON_EPISODE_TAG, re.IGNORECASE)
_LATEST_EPISODE_URL_RE = re.compile(SEASON_EPISODE_URL, re.IGNORECASE)

def _extract_latest_episodes(soup: BeautifulSoupT) -> List[LatestEpisode]:
    episodes: List[LatestEpisode] = []
    seen: set[str] = set()

    for anchor in soup.select(".newEpisodeList a[href]"):
        href = (anchor.get("href") or "").strip()
        if not href or "/anime/stream/" not in href:
            continue
        url = _absolute_url(href)
        if not url:
            continue

        title_tag = anchor.select_one("strong")
        series_title = (title_tag.get_text(strip=True) if title_tag else "").strip()
        if not series_title:
            continue

        season_number: Optional[int] = None
        episode_number: Optional[int] = None

        match = _LATEST_EPISODE_URL_RE.search(href)
        if match:
            season_number = int(match.group(1))
            episode_number = int(match.group(2))

        if season_number is None or episode_number is None:
            tag_node = (
                anchor.select_one("span.listTag.bigListTag.blue2")
                or anchor.select_one("span.listTag.blue2")
                or anchor.select_one("span.blue2")
            )
            tag_text = (tag_node.get_text(" ", strip=True) if tag_node else "").strip()
            match = _LATEST_EPISODE_TAG_RE.search(tag_text)
            if not match:
                continue
            season_number = int(match.group(1))
            episode_number = int(match.group(2))

        if season_number is None or episode_number is None:
            continue

        airdate_node = anchor.select_one("span.elementFloatRight")
        airdate = (airdate_node.get_text(" ", strip=True) if airdate_node else "").strip()

        key = f"{url}\t{season_number}\t{episode_number}"
        if key in seen:
            continue
        seen.add(key)

        _log_parsed_url(url)
        episodes.append(
            LatestEpisode(
                series_title=series_title,
                season=season_number,
                episode=episode_number,
                url=url,
                airdate=airdate,
            )
        )

    return episodes

def scrape_anime_detail(anime_identifier: str, max_seasons: Optional[int] = None) -> List[SeasonInfo]:
    _ensure_requests()
    anime_url = _series_root_url(_absolute_url(anime_identifier))
    _log_url(anime_url, kind="ANIME")
    session = get_requests_session("aniworld", headers=HEADERS)
    try:
        _get_soup(BASE_URL, session=session)
    except Exception:
        pass
    soup = _get_soup(anime_url, session=session)

    base_anime_url = _series_root_url(_extract_canonical_url(soup, anime_url))
    season_links = _extract_season_links(soup)
    season_count = _extract_number_of_seasons(soup)
    if season_count and (not season_links or len(season_links) < season_count):
        existing = {number for number, _ in season_links}
        for number in range(1, season_count + 1):
            if number in existing:
                continue
            season_url = f"{base_anime_url}/staffel-{number}"
            _log_parsed_url(season_url)
            season_links.append((number, season_url))
        season_links.sort(key=lambda item: item[0])
    if max_seasons is not None:
        season_links = season_links[:max_seasons]

    seasons: List[SeasonInfo] = []
    for number, url in season_links:
        season_soup = _get_soup(url, session=session)
        episodes = _extract_episodes(season_soup)
        seasons.append(SeasonInfo(number=number, url=url, episodes=episodes))
    seasons.sort(key=lambda s: s.number)
    return seasons


def resolve_redirect(target_url: str) -> Optional[str]:
    _ensure_requests()
    normalized_url = _absolute_url(target_url)
    _log_visit(normalized_url)
    session = get_requests_session("aniworld", headers=HEADERS)
    _get_soup(BASE_URL, session=session)
    response = session.get(normalized_url, headers=HEADERS, timeout=DEFAULT_TIMEOUT, allow_redirects=True)
    if response.url:
        _log_url(response.url, kind="RESOLVED")
    return response.url if response.url else None


def fetch_episode_hoster_names(episode_url: str) -> List[str]:
    _ensure_requests()
    normalized_url = _absolute_url(episode_url)
    session = get_requests_session("aniworld", headers=HEADERS)
    _get_soup(BASE_URL, session=session)
    soup = _get_soup(normalized_url, session=session)
    names: List[str] = []
    seen: set[str] = set()
    for anchor in soup.select(".hosterSiteVideo a.watchEpisode"):
        title = anchor.select_one("h4")
        name = title.get_text(strip=True) if title else ""
        if not name:
            name = anchor.get_text(" ", strip=True)
        name = (name or "").strip()
        if name.lower().startswith("hoster "):
            name = name[7:].strip()
        href = anchor.get("href") or ""
        url = _absolute_url(href)
        if url:
            _log_parsed_url(url)
        key = name.casefold().strip()
        if not key or key in seen:
            continue
        seen.add(key)
        names.append(name)
    if names:
        _log_url(f"{normalized_url}#hosters={','.join(names)}", kind="HOSTERS")
    return names


def fetch_episode_stream_link(
    episode_url: str,
    *,
    preferred_hosters: Optional[List[str]] = None,
) -> Optional[str]:
    _ensure_requests()
    normalized_url = _absolute_url(episode_url)
    preferred = [hoster.lower() for hoster in (preferred_hosters or DEFAULT_PREFERRED_HOSTERS)]
    session = get_requests_session("aniworld", headers=HEADERS)
    _get_soup(BASE_URL, session=session)
    soup = _get_soup(normalized_url, session=session)
    candidates: List[Tuple[str, str]] = []
    for anchor in soup.select(".hosterSiteVideo a.watchEpisode"):
        name_tag = anchor.select_one("h4")
        name = name_tag.get_text(strip=True) if name_tag else ""
        href = anchor.get("href") or ""
        url = _absolute_url(href)
        if url:
            _log_parsed_url(url)
        if name and url:
            candidates.append((name, url))
    if not candidates:
        return None
    candidates.sort(key=lambda item: item[0].casefold())
    selected_url = None
    for wanted in preferred:
        for name, url in candidates:
            if wanted in name.casefold():
                selected_url = url
                break
        if selected_url:
            break
    if not selected_url:
        selected_url = candidates[0][1]
    resolved = resolve_redirect(selected_url) or selected_url
    return resolved


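The hoster-selection policy used by `fetch_episode_stream_link` can be sketched as a standalone helper. This is a minimal sketch; `pick_hoster` and the sample hoster names below are illustrative and not part of the plugin itself.

```python
from typing import List, Optional, Tuple


def pick_hoster(candidates: List[Tuple[str, str]], preferred: List[str]) -> Optional[str]:
    """First preferred substring match wins; otherwise the alphabetically
    first candidate is used as a fallback."""
    if not candidates:
        return None
    # Sort case-insensitively so the fallback choice is deterministic.
    ordered = sorted(candidates, key=lambda item: item[0].casefold())
    for wanted in (p.lower() for p in preferred):
        for name, url in ordered:
            if wanted in name.casefold():
                return url
    return ordered[0][1]
```

Sorting before matching means that when several hosters match the same preference, the result does not depend on page order.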
def search_animes(query: str) -> List[SeriesResult]:
    _ensure_requests()
    query = (query or "").strip()
    if not query:
        return []
    session = get_requests_session("aniworld", headers=HEADERS)
    try:
        session.get(BASE_URL, headers=HEADERS, timeout=DEFAULT_TIMEOUT)
    except Exception:
        pass
    data = _post_json(SEARCH_API_URL, payload={"keyword": query}, session=session)
    results: List[SeriesResult] = []
    seen: set[str] = set()
    if isinstance(data, list):
        for entry in data:
            if not isinstance(entry, dict):
                continue
            title = _strip_html((entry.get("title") or "").strip())
            if not title or not _matches_query(query, title=title):
                continue
            link = (entry.get("link") or "").strip()
            if not link.startswith("/anime/stream/"):
                continue
            if "/staffel-" in link or "/episode-" in link:
                continue
            if link.rstrip("/") == "/anime/stream":
                continue
            url = _absolute_url(link) if link else ""
            if url:
                _log_parsed_url(url)
            key = title.casefold().strip()
            if key in seen:
                continue
            seen.add(key)
            description = (entry.get("description") or "").strip()
            results.append(SeriesResult(title=title, description=description, url=url))
        return results

    soup = _get_soup_simple(SEARCH_URL.format(query=requests.utils.quote(query)))
    for anchor in soup.select("a[href^='/anime/stream/'][href]"):
        href = (anchor.get("href") or "").strip()
        if not href or "/staffel-" in href or "/episode-" in href:
            continue
        url = _absolute_url(href)
        if url:
            _log_parsed_url(url)
        title_node = anchor.select_one("h3") or anchor.select_one("strong")
        title = (title_node.get_text(" ", strip=True) if title_node else anchor.get_text(" ", strip=True)).strip()
        if not title:
            continue
        if not _matches_query(query, title=title):
            continue
        key = title.casefold().strip()
        if key in seen:
            continue
        seen.add(key)
        results.append(SeriesResult(title=title, description="", url=url))
    return results


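The link-filtering rules in `search_animes` (series root only, no season/episode pages, no bare prefix) can be isolated into one predicate. A minimal sketch; `is_series_link` is an illustrative name, not a function from the plugin.

```python
def is_series_link(link: str) -> bool:
    """Accept only series root links like /anime/stream/<slug>,
    rejecting season/episode sub-pages and the bare prefix."""
    link = (link or "").strip()
    if not link.startswith("/anime/stream/"):
        return False
    if "/staffel-" in link or "/episode-" in link:
        return False
    return link.rstrip("/") != "/anime/stream"
```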
class AniworldPlugin(BasisPlugin):
    name = "AniWorld (aniworld.to)"

    def __init__(self) -> None:
        self._anime_results: Dict[str, SeriesResult] = {}
        self._season_cache: Dict[str, List[SeasonInfo]] = {}
        self._episode_label_cache: Dict[Tuple[str, str], Dict[str, EpisodeInfo]] = {}
        self._popular_cache: Optional[List[SeriesResult]] = None
        self._genre_cache: Optional[Dict[str, List[SeriesResult]]] = None
        self._latest_cache: Dict[int, List[LatestEpisode]] = {}
        self._latest_hoster_cache: Dict[str, List[str]] = {}
        self._requests_available = REQUESTS_AVAILABLE
        self._default_preferred_hosters: List[str] = list(DEFAULT_PREFERRED_HOSTERS)
        self._preferred_hosters: List[str] = list(self._default_preferred_hosters)
        self._hoster_cache: Dict[Tuple[str, str, str], List[str]] = {}
        self.is_available = True
        self.unavailable_reason: Optional[str] = None
        if not self._requests_available:  # pragma: no cover - optional dependency
            self.is_available = False
            self.unavailable_reason = "requests/bs4 fehlen. Installiere 'requests' und 'beautifulsoup4'."
            if REQUESTS_IMPORT_ERROR:
                print(f"AniworldPlugin Importfehler: {REQUESTS_IMPORT_ERROR}")

    def capabilities(self) -> set[str]:
        return {"popular_series", "genres", "latest_episodes"}

    def _find_series_by_title(self, title: str) -> Optional[SeriesResult]:
        title = (title or "").strip()
        if not title:
            return None

        direct = self._anime_results.get(title)
        if direct:
            return direct

        wanted = title.casefold().strip()

        for candidate in self._anime_results.values():
            if candidate.title and candidate.title.casefold().strip() == wanted:
                return candidate

        try:
            for entry in self._ensure_popular():
                if entry.title and entry.title.casefold().strip() == wanted:
                    self._anime_results[entry.title] = entry
                    return entry
        except Exception:
            pass

        try:
            for entries in self._ensure_genres().values():
                for entry in entries:
                    if entry.title and entry.title.casefold().strip() == wanted:
                        self._anime_results[entry.title] = entry
                        return entry
        except Exception:
            pass

        try:
            for entry in search_animes(title):
                if entry.title and entry.title.casefold().strip() == wanted:
                    self._anime_results[entry.title] = entry
                    return entry
        except Exception:
            pass

        return None

    def _ensure_popular(self) -> List[SeriesResult]:
        if self._popular_cache is not None:
            return list(self._popular_cache)
        soup = _get_soup_simple(POPULAR_ANIMES_URL)
        results: List[SeriesResult] = []
        seen: set[str] = set()
        for anchor in soup.select("div.seriesListContainer a[href^='/anime/stream/']"):
            href = (anchor.get("href") or "").strip()
            if not href or "/staffel-" in href or "/episode-" in href:
                continue
            url = _absolute_url(href)
            if url:
                _log_parsed_url(url)
            title_node = anchor.select_one("h3")
            title = (title_node.get_text(" ", strip=True) if title_node else "").strip()
            if not title:
                continue
            description = ""
            desc_node = anchor.select_one("small")
            if desc_node:
                description = desc_node.get_text(" ", strip=True).strip()
            key = title.casefold().strip()
            if key in seen:
                continue
            seen.add(key)
            results.append(SeriesResult(title=title, description=description, url=url))
        self._popular_cache = list(results)
        return list(results)

    def popular_series(self) -> List[str]:
        if not self._requests_available:
            return []
        entries = self._ensure_popular()
        self._anime_results.update({entry.title: entry for entry in entries if entry.title})
        return [entry.title for entry in entries if entry.title]

    def latest_episodes(self, page: int = 1) -> List[LatestEpisode]:
        if not self._requests_available:
            return []
        try:
            page = int(page or 1)
        except Exception:
            page = 1
        page = max(1, page)

        cached = self._latest_cache.get(page)
        if cached is not None:
            return list(cached)

        url = LATEST_EPISODES_URL
        if page > 1:
            url = f"{url}?page={page}"

        soup = _get_soup_simple(url)
        episodes = _extract_latest_episodes(soup)
        self._latest_cache[page] = list(episodes)
        return list(episodes)

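The page-argument sanitizing at the top of `latest_episodes` (coerce to int, fall back to 1, clamp to at least 1) is a reusable pattern. A minimal sketch; `normalize_page` is an illustrative name, not plugin API.

```python
def normalize_page(page: object) -> int:
    """Coerce arbitrary input to a 1-based page number, falling back to 1."""
    try:
        number = int(page or 1)
    except Exception:
        number = 1
    return max(1, number)
```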
    def _ensure_genres(self) -> Dict[str, List[SeriesResult]]:
        if self._genre_cache is not None:
            return {key: list(value) for key, value in self._genre_cache.items()}
        soup = _get_soup_simple(GENRES_URL)
        results: Dict[str, List[SeriesResult]] = {}
        genre_blocks = soup.select("#seriesContainer div.genre")
        if not genre_blocks:
            genre_blocks = soup.select("div.genre")
        for genre_block in genre_blocks:
            name_node = genre_block.select_one(".seriesGenreList h3")
            genre_name = (name_node.get_text(" ", strip=True) if name_node else "").strip()
            if not genre_name:
                continue
            entries: List[SeriesResult] = []
            seen: set[str] = set()
            for anchor in genre_block.select("ul li a[href]"):
                href = (anchor.get("href") or "").strip()
                if not href or "/staffel-" in href or "/episode-" in href:
                    continue
                url = _absolute_url(href)
                if url:
                    _log_parsed_url(url)
                title = (anchor.get_text(" ", strip=True) or "").strip()
                if not title:
                    continue
                key = title.casefold().strip()
                if key in seen:
                    continue
                seen.add(key)
                entries.append(SeriesResult(title=title, description="", url=url))
            if entries:
                results[genre_name] = entries
        self._genre_cache = {key: list(value) for key, value in results.items()}
        # Populate the title->URL mapping for later resolution (seasons/episodes).
        for entries in results.values():
            for entry in entries:
                if not entry.title:
                    continue
                if entry.title not in self._anime_results:
                    self._anime_results[entry.title] = entry
        return {key: list(value) for key, value in results.items()}

    def genres(self) -> List[str]:
        if not self._requests_available:
            return []
        genres = list(self._ensure_genres().keys())
        return [g for g in genres if g]

    def titles_for_genre(self, genre: str) -> List[str]:
        genre = (genre or "").strip()
        if not genre or not self._requests_available:
            return []
        mapping = self._ensure_genres()
        entries = mapping.get(genre)
        if entries is None:
            wanted = genre.casefold()
            for key, value in mapping.items():
                if key.casefold() == wanted:
                    entries = value
                    break
        if not entries:
            return []
        # Additionally make sure the titles are cached.
        self._anime_results.update({entry.title: entry for entry in entries if entry.title and entry.title not in self._anime_results})
        return [entry.title for entry in entries if entry.title]

    def _season_label(self, number: int) -> str:
        return f"Staffel {number}"

    def _parse_season_number(self, season_label: str) -> Optional[int]:
        match = re.search(DIGITS, season_label or "")
        return int(match.group(1)) if match else None

    def _episode_label(self, info: EpisodeInfo) -> str:
        title = (info.title or "").strip()
        if title:
            return f"Episode {info.number} - {title}"
        return f"Episode {info.number}"

    def _cache_episode_labels(self, title: str, season_label: str, season_info: SeasonInfo) -> None:
        cache_key = (title, season_label)
        self._episode_label_cache[cache_key] = {self._episode_label(info): info for info in season_info.episodes}

    def _lookup_episode(self, title: str, season_label: str, episode_label: str) -> Optional[EpisodeInfo]:
        cache_key = (title, season_label)
        cached = self._episode_label_cache.get(cache_key)
        if cached:
            return cached.get(episode_label)
        seasons = self._ensure_seasons(title)
        number = self._parse_season_number(season_label)
        if number is None:
            return None
        for season_info in seasons:
            if season_info.number == number:
                self._cache_episode_labels(title, season_label, season_info)
                return self._episode_label_cache.get(cache_key, {}).get(episode_label)
        return None

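The label helpers above form a round trip: `_season_label` renders "Staffel N" and `_parse_season_number` extracts N back out via the shared `DIGITS` pattern. A minimal standalone sketch; here `DIGITS` is assumed to be a single-capture-group digit pattern like the one in `regex_patterns`, which is an assumption about that module.

```python
import re

# Assumed shape of the DIGITS pattern from regex_patterns: one capture group of digits.
DIGITS = r"(\d+)"


def season_label(number: int) -> str:
    return f"Staffel {number}"


def parse_season_number(label: str):
    match = re.search(DIGITS, label or "")
    return int(match.group(1)) if match else None
```

Round-tripping through the label is what lets the plugin map UI strings back to `SeasonInfo.number`.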
    async def search_titles(self, query: str) -> List[str]:
        query = (query or "").strip()
        if not query:
            self._anime_results.clear()
            self._season_cache.clear()
            self._episode_label_cache.clear()
            self._popular_cache = None
            return []
        if not self._requests_available:
            raise RuntimeError("AniworldPlugin kann ohne requests/bs4 nicht suchen.")
        try:
            results = search_animes(query)
        except Exception as exc:  # pragma: no cover
            self._anime_results.clear()
            self._season_cache.clear()
            self._episode_label_cache.clear()
            raise RuntimeError(f"AniWorld-Suche fehlgeschlagen: {exc}") from exc
        self._anime_results = {result.title: result for result in results}
        self._season_cache.clear()
        self._episode_label_cache.clear()
        return [result.title for result in results]

    def _ensure_seasons(self, title: str) -> List[SeasonInfo]:
        if title in self._season_cache:
            return self._season_cache[title]
        anime = self._find_series_by_title(title)
        if not anime:
            return []
        seasons = scrape_anime_detail(anime.url)
        self._season_cache[title] = list(seasons)
        return list(seasons)

    def seasons_for(self, title: str) -> List[str]:
        seasons = self._ensure_seasons(title)
        return [self._season_label(season.number) for season in seasons if season.episodes]

    def episodes_for(self, title: str, season: str) -> List[str]:
        seasons = self._ensure_seasons(title)
        number = self._parse_season_number(season)
        if number is None:
            return []
        for season_info in seasons:
            if season_info.number == number:
                labels = [self._episode_label(info) for info in season_info.episodes]
                self._cache_episode_labels(title, season, season_info)
                return labels
        return []

    def stream_link_for(self, title: str, season: str, episode: str) -> Optional[str]:
        if not self._requests_available:
            raise RuntimeError("AniworldPlugin kann ohne requests/bs4 keine Stream-Links liefern.")
        episode_info = self._lookup_episode(title, season, episode)
        if not episode_info:
            return None
        link = fetch_episode_stream_link(episode_info.url, preferred_hosters=self._preferred_hosters)
        if link:
            _log_url(link, kind="FOUND")
        return link

    def available_hosters_for(self, title: str, season: str, episode: str) -> List[str]:
        if not self._requests_available:
            raise RuntimeError("AniworldPlugin kann ohne requests/bs4 keine Hoster laden.")
        cache_key = (title, season, episode)
        cached = self._hoster_cache.get(cache_key)
        if cached is not None:
            return list(cached)
        episode_info = self._lookup_episode(title, season, episode)
        if not episode_info:
            return []
        names = fetch_episode_hoster_names(episode_info.url)
        self._hoster_cache[cache_key] = list(names)
        return list(names)

    def available_hosters_for_url(self, episode_url: str) -> List[str]:
        if not self._requests_available:
            raise RuntimeError("AniworldPlugin kann ohne requests/bs4 keine Hoster laden.")
        normalized = _absolute_url(episode_url)
        cached = self._latest_hoster_cache.get(normalized)
        if cached is not None:
            return list(cached)
        names = fetch_episode_hoster_names(normalized)
        self._latest_hoster_cache[normalized] = list(names)
        return list(names)

    def stream_link_for_url(self, episode_url: str) -> Optional[str]:
        if not self._requests_available:
            raise RuntimeError("AniworldPlugin kann ohne requests/bs4 keine Stream-Links liefern.")
        normalized = _absolute_url(episode_url)
        link = fetch_episode_stream_link(normalized, preferred_hosters=self._preferred_hosters)
        if link:
            _log_url(link, kind="FOUND")
        return link

    def resolve_stream_link(self, link: str) -> Optional[str]:
        if not self._requests_available:
            raise RuntimeError("AniworldPlugin kann ohne requests/bs4 keine Stream-Links aufloesen.")
        resolved = resolve_redirect(link)
        if not resolved:
            return None
        try:
            from resolveurl_backend import resolve as resolve_with_resolveurl
        except Exception:
            resolve_with_resolveurl = None
        if callable(resolve_with_resolveurl):
            resolved_by_resolveurl = resolve_with_resolveurl(resolved)
            if resolved_by_resolveurl:
                _log_url("ResolveURL", kind="HOSTER_RESOLVER")
                _log_url(resolved_by_resolveurl, kind="MEDIA")
                return resolved_by_resolveurl
        _log_url(resolved, kind="FINAL")
        return resolved

    def set_preferred_hosters(self, hosters: List[str]) -> None:
        normalized = [hoster.strip().lower() for hoster in hosters if hoster.strip()]
        if normalized:
            self._preferred_hosters = normalized

    def reset_preferred_hosters(self) -> None:
        self._preferred_hosters = list(self._default_preferred_hosters)


Plugin = AniworldPlugin
1052
dist/plugin.video.viewit/plugins/einschalten_plugin.py
vendored
@@ -1,966 +0,0 @@
"""Serienstream (s.to) Integration als Downloader-Plugin.
|
||||
|
||||
Hinweise:
|
||||
- Diese Integration nutzt optional `requests` + `beautifulsoup4` (bs4).
|
||||
- In Kodi koennen zusaetzliche Debug-Funktionen ueber Addon-Settings aktiviert werden
|
||||
(URL-Logging, HTML-Dumps, Benachrichtigungen).
|
||||
"""
|
||||
|
||||
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import datetime
import hashlib
import os
import re
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, TypeAlias

try:  # pragma: no cover - optional dependency
    import requests
    from bs4 import BeautifulSoup  # type: ignore[import-not-found]
except ImportError as exc:  # pragma: no cover - optional dependency
    requests = None
    BeautifulSoup = None
    REQUESTS_AVAILABLE = False
    REQUESTS_IMPORT_ERROR = exc
else:
    REQUESTS_AVAILABLE = True
    REQUESTS_IMPORT_ERROR = None

try:  # pragma: no cover - optional Kodi helpers
    import xbmcaddon  # type: ignore[import-not-found]
    import xbmcvfs  # type: ignore[import-not-found]
    import xbmcgui  # type: ignore[import-not-found]
except ImportError:  # pragma: no cover - allow running outside Kodi
    xbmcaddon = None
    xbmcvfs = None
    xbmcgui = None

from plugin_interface import BasisPlugin
from plugin_helpers import dump_response_html, get_setting_bool, log_url, notify_url
from http_session_pool import get_requests_session
from regex_patterns import SEASON_EPISODE_TAG, SEASON_EPISODE_URL

if TYPE_CHECKING:  # pragma: no cover
    from requests import Session as RequestsSession
    from bs4 import BeautifulSoup as BeautifulSoupT  # type: ignore[import-not-found]
else:  # pragma: no cover
    RequestsSession: TypeAlias = Any
    BeautifulSoupT: TypeAlias = Any


BASE_URL = "https://s.to"
SERIES_BASE_URL = f"{BASE_URL}/serie/stream"
POPULAR_SERIES_URL = f"{BASE_URL}/beliebte-serien"
LATEST_EPISODES_URL = f"{BASE_URL}"
DEFAULT_PREFERRED_HOSTERS = ["voe"]
DEFAULT_TIMEOUT = 20
ADDON_ID = "plugin.video.viewit"
GLOBAL_SETTING_LOG_URLS = "debug_log_urls"
GLOBAL_SETTING_DUMP_HTML = "debug_dump_html"
GLOBAL_SETTING_SHOW_URL_INFO = "debug_show_url_info"
HEADERS = {
    "User-Agent": "Mozilla/5.0 (Kodi; ViewIt) AppleWebKit/537.36 (KHTML, like Gecko)",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "de-DE,de;q=0.9,en;q=0.8",
    "Connection": "keep-alive",
}


@dataclass
class SeriesResult:
    title: str
    description: str
    url: str


@dataclass
class EpisodeInfo:
    number: int
    title: str
    original_title: str
    url: str
    season_label: str = ""
    languages: List[str] = field(default_factory=list)
    hosters: List[str] = field(default_factory=list)


@dataclass
class LatestEpisode:
    series_title: str
    season: int
    episode: int
    url: str
    airdate: str


@dataclass
class SeasonInfo:
    number: int
    url: str
    episodes: List[EpisodeInfo]


def _absolute_url(href: str) -> str:
    return f"{BASE_URL}{href}" if href.startswith("/") else href


def _normalize_series_url(identifier: str) -> str:
    if identifier.startswith("http://") or identifier.startswith("https://"):
        return identifier.rstrip("/")
    slug = identifier.strip("/")
    return f"{SERIES_BASE_URL}/{slug}"


def _series_root_url(url: str) -> str:
    """Normalizes a series URL to its root URL (without /staffel-x or /episode-x)."""
    normalized = (url or "").strip().rstrip("/")
    normalized = re.sub(r"/staffel-\d+(?:/.*)?$", "", normalized)
    normalized = re.sub(r"/episode-\d+(?:/.*)?$", "", normalized)
    return normalized.rstrip("/")


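The URL normalization above can be demonstrated standalone: the two regex substitutions strip any trailing `/staffel-N` and `/episode-N` segments (including anything after them), leaving the series root. Same logic as `_series_root_url`, reproduced here for a runnable demo.

```python
import re


def series_root_url(url: str) -> str:
    # Strip trailing season/episode path segments to get the series root.
    normalized = (url or "").strip().rstrip("/")
    normalized = re.sub(r"/staffel-\d+(?:/.*)?$", "", normalized)
    normalized = re.sub(r"/episode-\d+(?:/.*)?$", "", normalized)
    return normalized.rstrip("/")
```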
def _log_visit(url: str) -> None:
    _log_url(url, kind="VISIT")
    _notify_url(url)
    if xbmcaddon is None:
        print(f"Visiting: {url}")


def _normalize_text(value: str) -> str:
    """Legacy normalization (kept for backwards compatibility)."""
    value = value.casefold()
    value = re.sub(r"[^a-z0-9]+", "", value)
    return value


def _normalize_search_text(value: str) -> str:
    """Normalizes text for search without merging word boundaries.

    Important: non-alphanumeric characters are replaced with spaces instead of
    being removed. This prevents artificial matches across word boundaries
    (e.g. "an" + "na" -> "anna").
    """

    value = (value or "").casefold()
    value = re.sub(r"[^a-z0-9]+", " ", value)
    value = re.sub(r"\s+", " ", value).strip()
    return value


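The difference between the legacy and the search normalization matters: deleting separators can glue adjacent words together and create false substring matches, while replacing them with spaces preserves word boundaries. Same scheme as `_normalize_search_text` above, reproduced for a runnable demonstration.

```python
import re


def normalize_search_text(value: str) -> str:
    # Separators become spaces so word boundaries survive normalization.
    value = (value or "").casefold()
    value = re.sub(r"[^a-z0-9]+", " ", value)
    return re.sub(r"\s+", " ", value).strip()
```

With the legacy scheme, "Plan Nachts" would collapse to "plannachts" and spuriously contain the query "anna"; with spaces it stays "plan nachts" and does not.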
def _get_setting_bool(setting_id: str, *, default: bool = False) -> bool:
    return get_setting_bool(ADDON_ID, setting_id, default=default)


def _notify_url(url: str) -> None:
    notify_url(ADDON_ID, heading="Serienstream", url=url, enabled_setting_id=GLOBAL_SETTING_SHOW_URL_INFO)


def _log_url(url: str, *, kind: str = "VISIT") -> None:
    log_url(ADDON_ID, enabled_setting_id=GLOBAL_SETTING_LOG_URLS, log_filename="serienstream_urls.log", url=url, kind=kind)


def _log_parsed_url(url: str) -> None:
    _log_url(url, kind="PARSE")


def _log_response_html(url: str, body: str) -> None:
    dump_response_html(
        ADDON_ID,
        enabled_setting_id=GLOBAL_SETTING_DUMP_HTML,
        url=url,
        body=body,
        filename_prefix="s_to_response",
    )


def _ensure_requests() -> None:
    if requests is None or BeautifulSoup is None:
        raise RuntimeError("requests/bs4 sind nicht verfuegbar.")


def _looks_like_cloudflare_challenge(body: str) -> bool:
    lower = body.lower()
    markers = (
        "cf-browser-verification",
        "cf-challenge",
        "cf_chl",
        "challenge-platform",
        "attention required! | cloudflare",
        "just a moment...",
        "cloudflare ray id",
    )
    return any(marker in lower for marker in markers)


def _get_soup(url: str, *, session: Optional[RequestsSession] = None) -> BeautifulSoupT:
    _ensure_requests()
    _log_visit(url)
    sess = session or get_requests_session("serienstream", headers=HEADERS)
    response = sess.get(url, headers=HEADERS, timeout=DEFAULT_TIMEOUT)
    response.raise_for_status()
    if response.url and response.url != url:
        _log_url(response.url, kind="REDIRECT")
    _log_response_html(url, response.text)
    if _looks_like_cloudflare_challenge(response.text):
        raise RuntimeError("Cloudflare-Schutz erkannt. requests reicht ggf. nicht aus.")
    return BeautifulSoup(response.text, "html.parser")


def _get_soup_simple(url: str) -> BeautifulSoupT:
    _ensure_requests()
    _log_visit(url)
    sess = get_requests_session("serienstream", headers=HEADERS)
    response = sess.get(url, headers=HEADERS, timeout=DEFAULT_TIMEOUT)
    response.raise_for_status()
    if response.url and response.url != url:
        _log_url(response.url, kind="REDIRECT")
    _log_response_html(url, response.text)
    if _looks_like_cloudflare_challenge(response.text):
        raise RuntimeError("Cloudflare-Schutz erkannt. requests reicht ggf. nicht aus.")
    return BeautifulSoup(response.text, "html.parser")


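The Cloudflare heuristic used by the soup helpers is a plain substring scan over known challenge-page markers, and it can be exercised without any network access. Same marker list as the plugin's private helper above.

```python
def looks_like_cloudflare_challenge(body: str) -> bool:
    # Case-insensitive scan for well-known Cloudflare challenge-page markers.
    lower = body.lower()
    markers = (
        "cf-browser-verification",
        "cf-challenge",
        "cf_chl",
        "challenge-platform",
        "attention required! | cloudflare",
        "just a moment...",
        "cloudflare ray id",
    )
    return any(marker in lower for marker in markers)
```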
def search_series(query: str) -> List[SeriesResult]:
    """Searches the (/serien) catalog (genre list) for series by title/alternative title."""
    _ensure_requests()
    normalized_query = _normalize_search_text(query)
    if not normalized_query:
        return []
    # Direct fetch as in fetch_serien.py.
    catalog_url = f"{BASE_URL}/serien?by=genre"
    soup = _get_soup_simple(catalog_url)
    results: List[SeriesResult] = []
    for series in parse_series_catalog(soup).values():
        for entry in series:
            haystack = _normalize_search_text(entry.title)
            if entry.title and normalized_query in haystack:
                results.append(entry)
    return results


def parse_series_catalog(soup: BeautifulSoupT) -> Dict[str, List[SeriesResult]]:
    """Parses the series overview (/serien) and returns genre -> list of series."""
    catalog: Dict[str, List[SeriesResult]] = {}

    # New layout (as of 2026-01): group header + list.
    # - Header: `div.background-1 ...` with `h3`
    # - Entries: `ul.series-list` -> `li.series-item[data-search]` -> `a[href]`
    for header in soup.select("div.background-1 h3"):
        group = (header.get_text(strip=True) or "").strip()
        if not group:
            continue
        list_node = header.parent.find_next_sibling("ul", class_="series-list")
        if not list_node:
            continue
        series: List[SeriesResult] = []
        for item in list_node.select("li.series-item"):
            anchor = item.find("a", href=True)
            if not anchor:
                continue
            href = (anchor.get("href") or "").strip()
            url = _absolute_url(href)
            if url:
                _log_parsed_url(url)
            if ("/serie/" not in url) or "/staffel-" in url or "/episode-" in url:
                continue
            title = (anchor.get_text(" ", strip=True) or "").strip()
            description = (item.get("data-search") or "").strip()
            if title:
                series.append(SeriesResult(title=title, description=description, url=url))
        if series:
            catalog[group] = series

    return catalog


def _extract_season_links(soup: BeautifulSoupT) -> List[Tuple[int, str]]:
    season_links: List[Tuple[int, str]] = []
    seen_numbers: set[int] = set()
    anchors = soup.select("ul.nav.list-items-nav a[data-season-pill][href]")
    for anchor in anchors:
        href = anchor.get("href") or ""
        if "/episode-" in href:
            continue
        data_number = (anchor.get("data-season-pill") or "").strip()
        match = re.search(r"/staffel-(\d+)", href)
        if match:
            number = int(match.group(1))
        elif data_number.isdigit():
            number = int(data_number)
        else:
            label = anchor.get_text(strip=True)
            if not label.isdigit():
                continue
            number = int(label)
        if number in seen_numbers:
            continue
        seen_numbers.add(number)
        season_url = _absolute_url(href)
        if season_url:
            _log_parsed_url(season_url)
        season_links.append((number, season_url))
    season_links.sort(key=lambda item: item[0])
    return season_links


def _extract_number_of_seasons(soup: BeautifulSoupT) -> Optional[int]:
    tag = soup.select_one('meta[itemprop="numberOfSeasons"]')
    if not tag:
        return None
    content = (tag.get("content") or "").strip()
    if not content.isdigit():
        return None
    count = int(content)
    return count if count > 0 else None


def _extract_canonical_url(soup: BeautifulSoupT, fallback: str) -> str:
    canonical = soup.select_one('link[rel="canonical"][href]')
    href = (canonical.get("href") if canonical else "") or ""
    href = href.strip()
    if href.startswith("http://") or href.startswith("https://"):
        return href.rstrip("/")
    return fallback.rstrip("/")


def _extract_episodes(soup: BeautifulSoupT) -> List[EpisodeInfo]:
    episodes: List[EpisodeInfo] = []
    season_label = ""
    season_header = soup.select_one("section.episode-section h2") or soup.select_one("h2.h3")
    if season_header:
        season_label = (season_header.get_text(" ", strip=True) or "").strip()

    language_map = {
        "german": "DE",
        "english": "EN",
        "japanese": "JP",
        "turkish": "TR",
        "spanish": "ES",
        "italian": "IT",
        "french": "FR",
        "korean": "KO",
        "russian": "RU",
        "polish": "PL",
        "portuguese": "PT",
        "chinese": "ZH",
        "arabic": "AR",
        "thai": "TH",
    }
    # New layout (as of 2026-01): episode table with rows and an onclick URL.
    rows = soup.select("table.episode-table tbody tr.episode-row")
    for index, row in enumerate(rows):
        onclick = (row.get("onclick") or "").strip()
        url = ""
        if onclick:
            match = re.search(r"location=['\\\"]([^'\\\"]+)['\\\"]", onclick)
            if match:
                url = _absolute_url(match.group(1))
        if not url:
            anchor = row.find("a", href=True)
            url = _absolute_url(anchor.get("href")) if anchor else ""
        if url:
            _log_parsed_url(url)

        number_tag = row.select_one(".episode-number-cell")
        number_text = (number_tag.get_text(strip=True) if number_tag else "").strip()
        match = re.search(r"/episode-(\d+)", url) if url else None
        if match:
            number = int(match.group(1))
        else:
            digits = "".join(ch for ch in number_text if ch.isdigit())
            number = int(digits) if digits else index + 1

        title_tag = row.select_one(".episode-title-ger")
        original_tag = row.select_one(".episode-title-eng")
        title = (title_tag.get_text(strip=True) if title_tag else "").strip()
        original_title = (original_tag.get_text(strip=True) if original_tag else "").strip()
        if not title:
            title = f"Episode {number}"

        hosters: List[str] = []
        for img in row.select(".episode-watch-cell img"):
            label = (img.get("alt") or img.get("title") or "").strip()
            if label and label not in hosters:
                hosters.append(label)

        languages: List[str] = []
        for flag in row.select(".episode-language-cell .watch-language"):
            classes = flag.get("class") or []
            if isinstance(classes, str):
||||
classes = classes.split()
|
||||
for cls in classes:
|
||||
if cls.startswith("svg-flag-"):
|
||||
key = cls.replace("svg-flag-", "").strip()
|
||||
if not key:
|
||||
continue
|
||||
value = language_map.get(key, key.upper())
|
||||
if value and value not in languages:
|
||||
languages.append(value)
|
||||
|
||||
episodes.append(
|
||||
EpisodeInfo(
|
||||
number=number,
|
||||
title=title,
|
||||
original_title=original_title,
|
||||
url=url,
|
||||
season_label=season_label,
|
||||
languages=languages,
|
||||
hosters=hosters,
|
||||
)
|
||||
)
|
||||
if episodes:
|
||||
return episodes
|
||||
return episodes
|
||||
|
||||
|
||||
def fetch_episode_stream_link(
    episode_url: str,
    *,
    preferred_hosters: Optional[List[str]] = None,
) -> Optional[str]:
    _ensure_requests()
    normalized_url = _absolute_url(episode_url)
    preferred = [hoster.lower() for hoster in (preferred_hosters or DEFAULT_PREFERRED_HOSTERS)]
    session = get_requests_session("serienstream", headers=HEADERS)
    # Preflight is optional: the start page may return a 5xx while the target page still works.
    try:
        _get_soup(BASE_URL, session=session)
    except Exception:
        pass
    soup = _get_soup(normalized_url, session=session)
    candidates: List[Tuple[str, str]] = []
    for button in soup.select("button.link-box[data-play-url]"):
        play_url = (button.get("data-play-url") or "").strip()
        provider = (button.get("data-provider-name") or "").strip()
        url = _absolute_url(play_url)
        if url:
            _log_parsed_url(url)
        if provider and url:
            candidates.append((provider, url))
    if not candidates:
        return None
    for preferred_name in preferred:
        for name, url in candidates:
            if name.lower() == preferred_name:
                return url
    return candidates[0][1]

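The hoster-preference fallback in `fetch_episode_stream_link` can be exercised in isolation. A minimal sketch of that selection rule (provider names and URLs below are made up for illustration):

```python
from typing import List, Optional, Tuple

def pick_stream(candidates: List[Tuple[str, str]], preferred: List[str]) -> Optional[str]:
    # Walk the preference list first; fall back to the first candidate overall.
    for preferred_name in preferred:
        for name, url in candidates:
            if name.lower() == preferred_name:
                return url
    return candidates[0][1] if candidates else None

# Hypothetical providers, not real s.to data.
candidates = [("VOE", "https://example.org/a"), ("Filemoon", "https://example.org/b")]
print(pick_stream(candidates, ["filemoon", "voe"]))  # https://example.org/b
```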
def fetch_episode_hoster_names(episode_url: str) -> List[str]:
    """Reads the available hoster names for an episode."""
    _ensure_requests()
    normalized_url = _absolute_url(episode_url)
    session = get_requests_session("serienstream", headers=HEADERS)
    # Preflight is optional: the start page may return a 5xx while the target page still works.
    try:
        _get_soup(BASE_URL, session=session)
    except Exception:
        pass
    soup = _get_soup(normalized_url, session=session)
    names: List[str] = []
    seen: set[str] = set()
    for button in soup.select("button.link-box[data-provider-name]"):
        name = (button.get("data-provider-name") or "").strip()
        play_url = (button.get("data-play-url") or "").strip()
        url = _absolute_url(play_url)
        if url:
            _log_parsed_url(url)
        key = name.casefold().strip()
        if not key or key in seen:
            continue
        seen.add(key)
        names.append(name)
        _log_url(name, kind="HOSTER")
    if names:
        _log_url(f"{normalized_url}#hosters={','.join(names)}", kind="HOSTERS")
    return names

_LATEST_EPISODE_TAG_RE = re.compile(SEASON_EPISODE_TAG, re.IGNORECASE)
_LATEST_EPISODE_URL_RE = re.compile(SEASON_EPISODE_URL, re.IGNORECASE)

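The two compiled patterns above (values defined in `regex_patterns.py`) can be checked against sample strings; the inputs here are invented:

```python
import re

# Pattern values as defined in regex_patterns.py.
SEASON_EPISODE_TAG = r"S\s*(\d+)\s*E\s*(\d+)"
SEASON_EPISODE_URL = r"/staffel-(\d+)/episode-(\d+)"

tag_re = re.compile(SEASON_EPISODE_TAG, re.IGNORECASE)
url_re = re.compile(SEASON_EPISODE_URL, re.IGNORECASE)

m = url_re.search("/serie/beispiel/staffel-3/episode-12")
season, episode = int(m.group(1)), int(m.group(2))
print(season, episode)                  # 3 12
print(tag_re.search("s2 e5").groups())  # ('2', '5')
```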
def _extract_latest_episodes(soup: BeautifulSoupT) -> List[LatestEpisode]:
    """Parses the latest episodes from the start page."""
    episodes: List[LatestEpisode] = []
    seen: set[str] = set()

    for anchor in soup.select("a.latest-episode-row[href]"):
        href = (anchor.get("href") or "").strip()
        if not href or "/serie/" not in href:
            continue
        url = _absolute_url(href)
        if not url:
            continue

        title_node = anchor.select_one(".ep-title")
        series_title = (title_node.get("title") if title_node else "") or ""
        series_title = series_title.strip() or (title_node.get_text(strip=True) if title_node else "").strip()
        if not series_title:
            continue

        season_text = (anchor.select_one(".ep-season").get_text(strip=True) if anchor.select_one(".ep-season") else "").strip()
        episode_text = (anchor.select_one(".ep-episode").get_text(strip=True) if anchor.select_one(".ep-episode") else "").strip()
        season_number: Optional[int] = None
        episode_number: Optional[int] = None
        match = re.search(r"S\s*(\d+)", season_text, re.IGNORECASE)
        if match:
            season_number = int(match.group(1))
        match = re.search(r"E\s*(\d+)", episode_text, re.IGNORECASE)
        if match:
            episode_number = int(match.group(1))
        if season_number is None or episode_number is None:
            match = _LATEST_EPISODE_URL_RE.search(href)
            if match:
                season_number = int(match.group(1))
                episode_number = int(match.group(2))
        if season_number is None or episode_number is None:
            continue

        airdate_node = anchor.select_one(".ep-time")
        airdate = (airdate_node.get_text(" ", strip=True) if airdate_node else "").strip()

        key = f"{url}\t{season_number}\t{episode_number}"
        if key in seen:
            continue
        seen.add(key)

        _log_parsed_url(url)
        episodes.append(
            LatestEpisode(
                series_title=series_title,
                season=int(season_number),
                episode=int(episode_number),
                url=url,
                airdate=airdate,
            )
        )

    return episodes

def resolve_redirect(target_url: str) -> Optional[str]:
    _ensure_requests()
    normalized_url = _absolute_url(target_url)
    _log_visit(normalized_url)
    session = get_requests_session("serienstream", headers=HEADERS)
    # Preflight is optional: the start page may return a 5xx while the target page still works.
    try:
        _get_soup(BASE_URL, session=session)
    except Exception:
        pass
    response = session.get(
        normalized_url,
        headers=HEADERS,
        timeout=DEFAULT_TIMEOUT,
        allow_redirects=True,
    )
    if response.url:
        _log_url(response.url, kind="RESOLVED")
    return response.url if response.url else None

def scrape_series_detail(
    series_identifier: str,
    max_seasons: Optional[int] = None,
) -> List[SeasonInfo]:
    _ensure_requests()
    series_url = _series_root_url(_normalize_series_url(series_identifier))
    _log_url(series_url, kind="SERIES")
    _notify_url(series_url)
    session = get_requests_session("serienstream", headers=HEADERS)
    # The preflight is optional; some environments/providers redirect the start page.
    try:
        _get_soup(BASE_URL, session=session)
    except Exception:
        pass
    soup = _get_soup(series_url, session=session)

    base_series_url = _series_root_url(_extract_canonical_url(soup, series_url))
    season_links = _extract_season_links(soup)
    season_count = _extract_number_of_seasons(soup)
    if season_count and (not season_links or len(season_links) < season_count):
        existing = {number for number, _ in season_links}
        for number in range(1, season_count + 1):
            if number in existing:
                continue
            season_url = f"{base_series_url}/staffel-{number}"
            _log_parsed_url(season_url)
            season_links.append((number, season_url))
        season_links.sort(key=lambda item: item[0])
    if max_seasons is not None:
        season_links = season_links[:max_seasons]
    seasons: List[SeasonInfo] = []
    for number, url in season_links:
        season_soup = _get_soup(url, session=session)
        episodes = _extract_episodes(season_soup)
        seasons.append(SeasonInfo(number=number, url=url, episodes=episodes))
    seasons.sort(key=lambda s: s.number)
    return seasons

class SerienstreamPlugin(BasisPlugin):
    """Downloader plugin that provides series from s.to via requests/bs4."""

    name = "Serienstream (s.to)"
    POPULAR_GENRE_LABEL = "⭐ Beliebte Serien"

    def __init__(self) -> None:
        self._series_results: Dict[str, SeriesResult] = {}
        self._season_cache: Dict[str, List[SeasonInfo]] = {}
        self._episode_label_cache: Dict[Tuple[str, str], Dict[str, EpisodeInfo]] = {}
        self._catalog_cache: Optional[Dict[str, List[SeriesResult]]] = None
        self._popular_cache: Optional[List[SeriesResult]] = None
        self._requests_available = REQUESTS_AVAILABLE
        self._default_preferred_hosters: List[str] = list(DEFAULT_PREFERRED_HOSTERS)
        self._preferred_hosters: List[str] = list(self._default_preferred_hosters)
        self._hoster_cache: Dict[Tuple[str, str, str], List[str]] = {}
        self._latest_cache: Dict[int, List[LatestEpisode]] = {}
        self._latest_hoster_cache: Dict[str, List[str]] = {}
        self.is_available = True
        self.unavailable_reason: Optional[str] = None
        if not self._requests_available:  # pragma: no cover - optional dependency
            self.is_available = False
            self.unavailable_reason = (
                "requests/bs4 fehlen. Installiere 'requests' und 'beautifulsoup4'."
            )
            print(
                "SerienstreamPlugin deaktiviert: requests/bs4 fehlen. "
                "Installiere 'requests' und 'beautifulsoup4'."
            )
            if REQUESTS_IMPORT_ERROR:
                print(f"Importfehler: {REQUESTS_IMPORT_ERROR}")
            return

    def _ensure_catalog(self) -> Dict[str, List[SeriesResult]]:
        if self._catalog_cache is not None:
            return self._catalog_cache
        # As of 2026-01, `?by=genre` returns consistent groups for `genres()`.
        catalog_url = f"{BASE_URL}/serien?by=genre"
        soup = _get_soup_simple(catalog_url)
        self._catalog_cache = parse_series_catalog(soup)
        return self._catalog_cache

    def genres(self) -> List[str]:
        """Optional: returns all genres from the series catalog."""
        if not self._requests_available:
            return []
        catalog = self._ensure_catalog()
        return sorted(catalog.keys(), key=str.casefold)

    def capabilities(self) -> set[str]:
        """Reports supported features for router menus."""
        return {"popular_series", "genres", "latest_episodes"}

    def popular_series(self) -> List[str]:
        """Returns the titles of the popular series (source: `/beliebte-serien`)."""
        if not self._requests_available:
            return []
        entries = self._ensure_popular()
        self._series_results.update({entry.title: entry for entry in entries if entry.title})
        return [entry.title for entry in entries if entry.title]

    def titles_for_genre(self, genre: str) -> List[str]:
        """Optional: returns the titles for a genre."""
        if not self._requests_available:
            return []
        genre = (genre or "").strip()
        if not genre:
            return []
        if genre == self.POPULAR_GENRE_LABEL:
            return self.popular_series()
        catalog = self._ensure_catalog()
        entries = catalog.get(genre, [])
        self._series_results.update({entry.title: entry for entry in entries if entry.title})
        return [entry.title for entry in entries if entry.title]

    def _ensure_popular(self) -> List[SeriesResult]:
        """Loads and caches the list of popular series from `/beliebte-serien`."""
        if self._popular_cache is not None:
            return list(self._popular_cache)
        soup = _get_soup_simple(POPULAR_SERIES_URL)
        results: List[SeriesResult] = []
        seen: set[str] = set()

        # New layout (as of 2026-01): the "Meistgesehen" section has cards with
        # `a.show-card` and the title in the `img alt=...` attribute.
        anchors = None
        for section in soup.select("div.mb-5"):
            h2 = section.select_one("h2")
            label = (h2.get_text(" ", strip=True) if h2 else "").casefold()
            if "meistgesehen" in label:
                anchors = section.select("a.show-card[href]")
                break
        if anchors is None:
            anchors = soup.select("a.show-card[href]")

        for anchor in anchors:
            href = (anchor.get("href") or "").strip()
            if not href or "/serie/" not in href:
                continue
            img = anchor.select_one("img[alt]")
            title = ((img.get("alt") if img else "") or "").strip()
            if not title or title in seen:
                continue
            url = _absolute_url(href).split("#", 1)[0].split("?", 1)[0].rstrip("/")
            url = re.sub(r"/staffel-\d+(?:/.*)?$", "", url).rstrip("/")
            if not url:
                continue
            _log_parsed_url(url)
            seen.add(title)
            results.append(SeriesResult(title=title, description="", url=url))

        self._popular_cache = list(results)
        return list(results)

    @staticmethod
    def _season_label(number: int) -> str:
        return f"Staffel {number}"

    @staticmethod
    def _episode_label(info: EpisodeInfo) -> str:
        suffix_parts: List[str] = []
        if info.original_title:
            suffix_parts.append(info.original_title)
        # Do not show the season in the episode label (the UI already sets it).
        suffix = f" ({' | '.join(suffix_parts)})" if suffix_parts else ""

        return f"Episode {info.number}: {info.title}{suffix}"

    @staticmethod
    def _parse_season_number(label: str) -> Optional[int]:
        digits = "".join(ch for ch in label if ch.isdigit())
        if not digits:
            return None
        return int(digits)

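`_parse_season_number` keeps only the digits of a label such as "Staffel 3". The same rule as a standalone function:

```python
from typing import Optional

def parse_season_number(label: str) -> Optional[int]:
    # Digit extraction identical to the static method above.
    digits = "".join(ch for ch in label if ch.isdigit())
    if not digits:
        return None
    return int(digits)

print(parse_season_number("Staffel 3"))  # 3
print(parse_season_number("Specials"))   # None
```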
    def _clear_episode_cache_for_title(self, title: str) -> None:
        keys_to_remove = [key for key in self._episode_label_cache if key[0] == title]
        for key in keys_to_remove:
            self._episode_label_cache.pop(key, None)
        keys_to_remove = [key for key in self._hoster_cache if key[0] == title]
        for key in keys_to_remove:
            self._hoster_cache.pop(key, None)

    def _cache_episode_labels(self, title: str, season_label: str, season_info: SeasonInfo) -> None:
        cache_key = (title, season_label)
        self._episode_label_cache[cache_key] = {
            self._episode_label(info): info for info in season_info.episodes
        }

    def _lookup_episode(self, title: str, season_label: str, episode_label: str) -> Optional[EpisodeInfo]:
        cache_key = (title, season_label)
        cached = self._episode_label_cache.get(cache_key)
        if cached:
            return cached.get(episode_label)

        seasons = self._ensure_seasons(title)
        number = self._parse_season_number(season_label)
        if number is None:
            return None

        for season_info in seasons:
            if season_info.number == number:
                self._cache_episode_labels(title, season_label, season_info)
                return self._episode_label_cache.get(cache_key, {}).get(episode_label)
        return None

    async def search_titles(self, query: str) -> List[str]:
        query = query.strip()
        if not query:
            self._series_results.clear()
            self._season_cache.clear()
            self._episode_label_cache.clear()
            self._catalog_cache = None
            return []
        if not self._requests_available:
            raise RuntimeError("SerienstreamPlugin kann ohne requests/bs4 nicht suchen.")
        try:
            # Uses the catalog (/serien), which is now grouped by genre.
            # There is also an Ajax endpoint, but it is not always reliably reachable.
            results = search_series(query)
        except Exception as exc:  # pragma: no cover - defensive logging
            self._series_results.clear()
            self._season_cache.clear()
            self._episode_label_cache.clear()
            self._catalog_cache = None
            raise RuntimeError(f"Serienstream-Suche fehlgeschlagen: {exc}") from exc
        self._series_results = {result.title: result for result in results}
        self._season_cache.clear()
        self._episode_label_cache.clear()
        return [result.title for result in results]

    def _ensure_seasons(self, title: str) -> List[SeasonInfo]:
        if title in self._season_cache:
            seasons = self._season_cache[title]
            # Log the URLs even on cache hits so it stays traceable which
            # pages are relevant for season/episode lists.
            if _get_setting_bool(GLOBAL_SETTING_LOG_URLS, default=False):
                series = self._series_results.get(title)
                if series and series.url:
                    _log_url(series.url, kind="CACHE")
                for season in seasons:
                    if season.url:
                        _log_url(season.url, kind="CACHE")
            return seasons
        series = self._series_results.get(title)
        if not series:
            # Kodi restarts the plugin on every navigation -> the in-memory search
            # cache is lost. Re-resolve the title in the catalog to get the series URL.
            catalog = self._ensure_catalog()
            lookup_key = title.casefold().strip()
            for entries in catalog.values():
                for entry in entries:
                    if entry.title.casefold().strip() == lookup_key:
                        series = entry
                        self._series_results[entry.title] = entry
                        break
                if series:
                    break
        if not series:
            return []
        try:
            seasons = scrape_series_detail(series.url)
        except Exception as exc:  # pragma: no cover - defensive logging
            raise RuntimeError(f"Serienstream-Staffeln konnten nicht geladen werden: {exc}") from exc
        self._clear_episode_cache_for_title(title)
        self._season_cache[title] = seasons
        return seasons

    def seasons_for(self, title: str) -> List[str]:
        seasons = self._ensure_seasons(title)
        # Serienstream occasionally returns seasons without episodes (e.g. parsing/layout changes).
        # Those should not appear as selectable menu entries in the UI.
        return [self._season_label(season.number) for season in seasons if season.episodes]

    def episodes_for(self, title: str, season: str) -> List[str]:
        seasons = self._ensure_seasons(title)
        number = self._parse_season_number(season)
        if number is None:
            return []
        for season_info in seasons:
            if season_info.number == number:
                labels = [self._episode_label(info) for info in season_info.episodes]
                self._cache_episode_labels(title, season, season_info)
                return labels
        return []

    def stream_link_for(self, title: str, season: str, episode: str) -> Optional[str]:
        if not self._requests_available:
            raise RuntimeError("SerienstreamPlugin kann ohne requests/bs4 keine Stream-Links liefern.")
        episode_info = self._lookup_episode(title, season, episode)
        if not episode_info:
            return None
        try:
            link = fetch_episode_stream_link(
                episode_info.url,
                preferred_hosters=self._preferred_hosters,
            )
            if link:
                _log_url(link, kind="FOUND")
            return link
        except Exception as exc:  # pragma: no cover - defensive logging
            raise RuntimeError(f"Stream-Link konnte nicht geladen werden: {exc}") from exc

    def available_hosters_for(self, title: str, season: str, episode: str) -> List[str]:
        if not self._requests_available:
            raise RuntimeError("SerienstreamPlugin kann ohne requests/bs4 keine Hoster laden.")
        cache_key = (title, season, episode)
        cached = self._hoster_cache.get(cache_key)
        if cached is not None:
            return list(cached)

        episode_info = self._lookup_episode(title, season, episode)
        if not episode_info:
            return []
        try:
            names = fetch_episode_hoster_names(episode_info.url)
        except Exception as exc:  # pragma: no cover - defensive logging
            raise RuntimeError(f"Hoster konnten nicht geladen werden: {exc}") from exc
        self._hoster_cache[cache_key] = list(names)
        return list(names)

    def latest_episodes(self, page: int = 1) -> List[LatestEpisode]:
        """Returns the latest episodes from `/neue-episoden`."""
        if not self._requests_available:
            return []
        try:
            page = int(page or 1)
        except Exception:
            page = 1
        page = max(1, page)
        cached = self._latest_cache.get(page)
        if cached is not None:
            return list(cached)

        url = LATEST_EPISODES_URL
        if page > 1:
            url = f"{url}?page={page}"
        soup = _get_soup_simple(url)
        episodes = _extract_latest_episodes(soup)
        self._latest_cache[page] = list(episodes)
        return list(episodes)

    def available_hosters_for_url(self, episode_url: str) -> List[str]:
        if not self._requests_available:
            raise RuntimeError("SerienstreamPlugin kann ohne requests/bs4 keine Hoster laden.")
        normalized = _absolute_url(episode_url)
        cached = self._latest_hoster_cache.get(normalized)
        if cached is not None:
            return list(cached)
        try:
            names = fetch_episode_hoster_names(normalized)
        except Exception as exc:  # pragma: no cover - defensive logging
            raise RuntimeError(f"Hoster konnten nicht geladen werden: {exc}") from exc
        self._latest_hoster_cache[normalized] = list(names)
        return list(names)

    def stream_link_for_url(self, episode_url: str) -> Optional[str]:
        if not self._requests_available:
            raise RuntimeError("SerienstreamPlugin kann ohne requests/bs4 keine Stream-Links liefern.")
        normalized = _absolute_url(episode_url)
        try:
            link = fetch_episode_stream_link(
                normalized,
                preferred_hosters=self._preferred_hosters,
            )
            if link:
                _log_url(link, kind="FOUND")
            return link
        except Exception as exc:  # pragma: no cover - defensive logging
            raise RuntimeError(f"Stream-Link konnte nicht geladen werden: {exc}") from exc

    def resolve_stream_link(self, link: str) -> Optional[str]:
        if not self._requests_available:
            raise RuntimeError("SerienstreamPlugin kann ohne requests/bs4 keine Stream-Links aufloesen.")
        try:
            resolved = resolve_redirect(link)
            if not resolved:
                return None
            try:
                from resolveurl_backend import resolve as resolve_with_resolveurl
            except Exception:
                resolve_with_resolveurl = None
            if callable(resolve_with_resolveurl):
                resolved_by_resolveurl = resolve_with_resolveurl(resolved)
                if resolved_by_resolveurl:
                    _log_url("ResolveURL", kind="HOSTER_RESOLVER")
                    _log_url(resolved_by_resolveurl, kind="MEDIA")
                    return resolved_by_resolveurl
            _log_url(resolved, kind="FINAL")
            return resolved
        except Exception as exc:  # pragma: no cover - defensive logging
            raise RuntimeError(f"Stream-Link konnte nicht verfolgt werden: {exc}") from exc

    def set_preferred_hosters(self, hosters: List[str]) -> None:
        normalized = [hoster.strip().lower() for hoster in hosters if hoster.strip()]
        if normalized:
            self._preferred_hosters = normalized

    def reset_preferred_hosters(self) -> None:
        self._preferred_hosters = list(self._default_preferred_hosters)


# Alias for automatic plugin discovery.
Plugin = SerienstreamPlugin

1027	dist/plugin.video.viewit/plugins/topstreamfilm_plugin.py (vendored)
File diff suppressed because it is too large
11	dist/plugin.video.viewit/regex_patterns.py (vendored)
@@ -1,11 +0,0 @@
#!/usr/bin/env python3
"""Shared regex pattern constants.

Keep common patterns in one place to avoid accidental double-escaping (e.g. "\\d").
"""

SEASON_EPISODE_TAG = r"S\s*(\d+)\s*E\s*(\d+)"
SEASON_EPISODE_URL = r"/staffel-(\d+)/episode-(\d+)"
STAFFEL_NUM_IN_URL = r"/staffel-(\d+)"
DIGITS = r"(\d+)"
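To make the docstring's warning concrete: inside a raw string, a doubled backslash turns the digit class into a literal backslash, so the pattern silently stops matching. A small demonstration:

```python
import re

GOOD = r"/staffel-(\d+)"   # \d is a digit class
BAD = r"/staffel-(\\d+)"   # doubled backslash: requires a literal "\" before "d"

assert re.search(GOOD, "/staffel-7").group(1) == "7"
assert re.search(BAD, "/staffel-7") is None
```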
2	dist/plugin.video.viewit/requirements.txt (vendored)
@@ -1,2 +0,0 @@
beautifulsoup4>=4.12
requests>=2.31
43	dist/plugin.video.viewit/resolveurl_backend.py (vendored)
@@ -1,43 +0,0 @@
"""Optionales ResolveURL-Backend für das Kodi-Addon.
|
||||
|
||||
Wenn `script.module.resolveurl` installiert ist, kann damit eine Hoster-URL
|
||||
zu einer abspielbaren Media-URL (inkl. evtl. Header-Suffix) aufgelöst werden.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from typing import Optional
|
||||
|
||||
|
||||
def resolve(url: str) -> Optional[str]:
|
||||
if not url:
|
||||
return None
|
||||
try:
|
||||
import resolveurl # type: ignore
|
||||
except Exception:
|
||||
return None
|
||||
|
||||
try:
|
||||
hosted = getattr(resolveurl, "HostedMediaFile", None)
|
||||
if callable(hosted):
|
||||
hmf = hosted(url)
|
||||
valid = getattr(hmf, "valid_url", None)
|
||||
if callable(valid) and not valid():
|
||||
return None
|
||||
resolver = getattr(hmf, "resolve", None)
|
||||
if callable(resolver):
|
||||
result = resolver()
|
||||
return str(result) if result else None
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
try:
|
||||
resolve_fn = getattr(resolveurl, "resolve", None)
|
||||
if callable(resolve_fn):
|
||||
result = resolve_fn(url)
|
||||
return str(result) if result else None
|
||||
except Exception:
|
||||
return None
|
||||
|
||||
return None
|
||||
|
||||
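The `getattr`/`callable` probing in `resolve()` avoids a hard dependency on ResolveURL's API surface. The same pattern can be demonstrated against a stand-in module object (no ResolveURL behavior is assumed here):

```python
from types import SimpleNamespace
from typing import Optional

def resolve_via(mod: object, url: str) -> Optional[str]:
    # Probe for a module-level resolve() and only call it if present.
    resolve_fn = getattr(mod, "resolve", None)
    if callable(resolve_fn):
        result = resolve_fn(url)
        return str(result) if result else None
    return None

fake = SimpleNamespace(resolve=lambda u: u + "#resolved")
print(resolve_via(fake, "https://example.org/v"))           # https://example.org/v#resolved
print(resolve_via(SimpleNamespace(), "https://example.org/v"))  # None
```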
BIN	dist/plugin.video.viewit/resources/logo.png (vendored)
Binary file not shown. (Before: 970 KiB)
36	dist/plugin.video.viewit/resources/settings.xml (vendored)
@@ -1,36 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<settings>
    <category label="Allgemein">
        <setting id="debug_log_urls" type="bool" label="Debug: URL-Log aktivieren (global)" default="false" />
        <setting id="debug_dump_html" type="bool" label="Debug: HTML-Antworten speichern (global)" default="false" />
        <setting id="debug_show_url_info" type="bool" label="Debug: Aufgerufene URL anzeigen (global)" default="false" />
    </category>
    <category label="TopStream">
        <setting id="topstream_base_url" type="text" label="Basis-URL (z.B. https://www.meineseite)" default="https://www.meineseite" />
        <setting id="topstream_genre_max_pages" type="number" label="Genres: max. Seiten laden (Pagination)" default="20" />
    </category>
    <category label="Einschalten">
        <setting id="einschalten_base_url" type="text" label="Basis-URL (nur eigene/autorisiert betriebene Quelle)" default="" />
        <setting id="einschalten_index_path" type="text" label="Index-Pfad (z.B. /)" default="/" />
        <setting id="einschalten_new_titles_path" type="text" label="Neue-Titel-Pfad (z.B. /movies/new)" default="/movies/new" />
        <setting id="einschalten_search_path" type="text" label="Suche-Pfad (z.B. /search)" default="/search" />
        <setting id="einschalten_genres_path" type="text" label="Genres-Pfad (z.B. /genres)" default="/genres" />
        <setting id="einschalten_enable_playback" type="bool" label="Wiedergabe aktivieren (nur autorisierte Quellen)" default="false" />
        <setting id="einschalten_watch_path_template" type="text" label="Watch-Pfad-Template (z.B. /api/movies/{id}/watch)" default="/api/movies/{id}/watch" />
    </category>
    <category label="TMDB">
        <setting id="tmdb_api_key" type="text" label="TMDB API Key" default="" />
        <setting id="tmdb_language" type="text" label="TMDB Sprache (z.B. de-DE)" default="de-DE" />
        <setting id="tmdb_prefetch_concurrency" type="number" label="TMDB: Parallelität (Prefetch, 1-20)" default="6" />
        <setting id="tmdb_show_plot" type="bool" label="TMDB Plot anzeigen" default="true" />
        <setting id="tmdb_show_art" type="bool" label="TMDB Poster/Thumb anzeigen" default="true" />
        <setting id="tmdb_show_fanart" type="bool" label="TMDB Fanart/Backdrop anzeigen" default="true" />
        <setting id="tmdb_show_rating" type="bool" label="TMDB Rating anzeigen" default="true" />
        <setting id="tmdb_show_votes" type="bool" label="TMDB Vote-Count anzeigen" default="false" />
        <setting id="tmdb_show_cast" type="bool" label="TMDB Cast anzeigen" default="false" />
        <setting id="tmdb_show_episode_cast" type="bool" label="TMDB Besetzung pro Episode anzeigen" default="false" />
        <setting id="tmdb_genre_metadata" type="bool" label="TMDB Meta in Genre-Liste anzeigen" default="false" />
        <setting id="tmdb_log_requests" type="bool" label="TMDB API Requests loggen" default="false" />
        <setting id="tmdb_log_responses" type="bool" label="TMDB API Antworten loggen" default="false" />
    </category>
</settings>
652	dist/plugin.video.viewit/tmdb.py (vendored)
@@ -1,652 +0,0 @@
from __future__ import annotations
|
||||
|
||||
from dataclasses import dataclass
|
||||
import json
|
||||
import threading
|
||||
from typing import Callable, Dict, List, Optional, Tuple
|
||||
from urllib.parse import urlencode
|
||||
|
||||
try: # pragma: no cover - optional dependency
|
||||
import requests
|
||||
except ImportError: # pragma: no cover
|
||||
requests = None
|
||||
|
||||
|
||||
TMDB_API_BASE = "https://api.themoviedb.org/3"
|
||||
TMDB_IMAGE_BASE = "https://image.tmdb.org/t/p"
|
||||
_TMDB_THREAD_LOCAL = threading.local()
|
||||
|
||||
|
||||
def _get_tmdb_session() -> "requests.Session | None":
    """Return a shared, per-thread requests Session.

    We use thread-local storage because ViewIt prefetches TMDB metadata using
    threads. `requests.Session` is not guaranteed to be thread-safe, but reusing
    a session within the same thread keeps connections warm.
    """

    if requests is None:
        return None
    sess = getattr(_TMDB_THREAD_LOCAL, "session", None)
    if sess is None:
        sess = requests.Session()
        setattr(_TMDB_THREAD_LOCAL, "session", sess)
    return sess


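The per-thread behavior described in the docstring above can be illustrated with a small, dependency-free sketch. `FakeSession` and `get_session` are hypothetical stand-ins for `requests.Session` and `_get_tmdb_session`, used only so the sketch runs without the optional dependency:

```python
import threading

_LOCAL = threading.local()


class FakeSession:
    """Stand-in for requests.Session so the sketch runs without the dependency."""


def get_session() -> FakeSession:
    # Same lookup pattern as _get_tmdb_session: one session object per thread.
    sess = getattr(_LOCAL, "session", None)
    if sess is None:
        sess = FakeSession()
        _LOCAL.session = sess
    return sess


# Within one thread the session object is reused ...
assert get_session() is get_session()

# ... while another thread gets its own instance.
seen = []
worker = threading.Thread(target=lambda: seen.append(get_session()))
worker.start()
worker.join()
assert seen[0] is not get_session()
```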
@dataclass(frozen=True)
class TmdbCastMember:
    name: str
    role: str
    thumb: str


@dataclass(frozen=True)
class TmdbShowMeta:
    tmdb_id: int
    plot: str
    poster: str
    fanart: str
    rating: float
    votes: int
    cast: List[TmdbCastMember]


def _image_url(path: str, *, size: str) -> str:
    path = (path or "").strip()
    if not path:
        return ""
    return f"{TMDB_IMAGE_BASE}/{size}{path}"


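For reference, the helper above simply concatenates base, size and path (TMDB image paths always begin with a slash). A standalone copy shows the resulting URL shapes; `image_url` here is a local re-statement of `_image_url`, not an import:

```python
TMDB_IMAGE_BASE = "https://image.tmdb.org/t/p"


def image_url(path: str, *, size: str) -> str:
    # Mirrors _image_url: empty or missing paths yield an empty string.
    path = (path or "").strip()
    if not path:
        return ""
    return f"{TMDB_IMAGE_BASE}/{size}{path}"


assert image_url("/abc123.jpg", size="w185") == "https://image.tmdb.org/t/p/w185/abc123.jpg"
assert image_url("", size="w342") == ""
assert image_url(None, size="w342") == ""
```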
def _fetch_credits(
    *,
    kind: str,
    tmdb_id: int,
    api_key: str,
    language: str,
    timeout: int,
    log: Callable[[str], None] | None,
    log_responses: bool,
) -> List[TmdbCastMember]:
    if requests is None or not tmdb_id:
        return []
    params = {"api_key": api_key, "language": (language or "de-DE").strip()}
    url = f"{TMDB_API_BASE}/{kind}/{tmdb_id}/credits?{urlencode(params)}"
    if callable(log):
        log(f"TMDB GET {url}")
    try:
        response = requests.get(url, timeout=timeout)
    except Exception as exc:  # pragma: no cover
        if callable(log):
            log(f"TMDB ERROR /{kind}/{{id}}/credits request_failed error={exc!r}")
        return []
    status = getattr(response, "status_code", None)
    if callable(log):
        log(f"TMDB RESPONSE /{kind}/{{id}}/credits status={status}")
    if status != 200:
        return []
    try:
        payload = response.json() or {}
    except Exception:
        return []
    if callable(log) and log_responses:
        try:
            dumped = json.dumps(payload, ensure_ascii=False)
        except Exception:
            dumped = str(payload)
        log(f"TMDB RESPONSE_BODY /{kind}/{{id}}/credits body={dumped[:2000]}")

    cast_payload = payload.get("cast") or []
    if callable(log):
        log(f"TMDB CREDITS /{kind}/{{id}}/credits cast={len(cast_payload)}")
    with_images: List[TmdbCastMember] = []
    without_images: List[TmdbCastMember] = []
    for entry in cast_payload:
        name = (entry.get("name") or "").strip()
        role = (entry.get("character") or "").strip()
        thumb = _image_url(entry.get("profile_path") or "", size="w185")
        if not name:
            continue
        member = TmdbCastMember(name=name, role=role, thumb=thumb)
        if thumb:
            with_images.append(member)
        else:
            without_images.append(member)

    # Many Kodi skins render placeholder heads when a cast thumbnail is missing.
    # Prefer cast entries that have an image; only if no images exist at all do
    # we return names without one.
    if with_images:
        return with_images[:30]
    return without_images[:30]


def _parse_cast_payload(cast_payload: object) -> List[TmdbCastMember]:
    if not isinstance(cast_payload, list):
        return []
    with_images: List[TmdbCastMember] = []
    without_images: List[TmdbCastMember] = []
    for entry in cast_payload:
        if not isinstance(entry, dict):
            continue
        name = (entry.get("name") or "").strip()
        role = (entry.get("character") or "").strip()
        thumb = _image_url(entry.get("profile_path") or "", size="w185")
        if not name:
            continue
        member = TmdbCastMember(name=name, role=role, thumb=thumb)
        if thumb:
            with_images.append(member)
        else:
            without_images.append(member)
    if with_images:
        return with_images[:30]
    return without_images[:30]


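The selection policy above (drop nameless entries, prefer entries with a profile image, cap at 30) can be exercised with a hypothetical payload in the shape of TMDB's `/credits` `"cast"` array. `CastMember` and `parse_cast` are simplified local copies for the sketch; the thumb here is the raw `profile_path` rather than a full image URL:

```python
from dataclasses import dataclass
from typing import List


@dataclass(frozen=True)
class CastMember:  # minimal stand-in for TmdbCastMember
    name: str
    role: str
    thumb: str


def parse_cast(cast_payload: object) -> List[CastMember]:
    # Same policy as _parse_cast_payload: skip nameless entries, prefer
    # entries that have a profile image, cap the result at 30.
    if not isinstance(cast_payload, list):
        return []
    with_images, without_images = [], []
    for entry in cast_payload:
        if not isinstance(entry, dict):
            continue
        name = (entry.get("name") or "").strip()
        if not name:
            continue
        thumb = entry.get("profile_path") or ""
        role = (entry.get("character") or "").strip()
        member = CastMember(name=name, role=role, thumb=thumb)
        (with_images if thumb else without_images).append(member)
    return (with_images or without_images)[:30]


payload = [
    {"name": "A", "character": "Hero", "profile_path": None},
    {"name": "B", "character": "Villain", "profile_path": "/b.jpg"},
    {"name": "", "character": "dropped"},
]
# Only "B" has an image, so the image-less "A" is filtered out entirely.
assert [m.name for m in parse_cast(payload)] == ["B"]
assert parse_cast("not-a-list") == []
```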
def _tmdb_get_json(
    *,
    url: str,
    timeout: int,
    log: Callable[[str], None] | None,
    log_responses: bool,
    session: "requests.Session | None" = None,
) -> Tuple[int | None, object | None, str]:
    """Fetch TMDB JSON, optionally through a shared session.

    Returns: (status_code, payload_or_none, body_text_or_empty)
    """

    if requests is None:
        return None, None, ""
    if callable(log):
        log(f"TMDB GET {url}")
    sess = session or _get_tmdb_session() or requests.Session()
    try:
        response = sess.get(url, timeout=timeout)
    except Exception as exc:  # pragma: no cover
        if callable(log):
            log(f"TMDB ERROR request_failed url={url} error={exc!r}")
        return None, None, ""

    status = getattr(response, "status_code", None)
    payload: object | None = None
    body_text = ""
    try:
        payload = response.json()
    except Exception:
        try:
            body_text = (response.text or "").strip()
        except Exception:
            body_text = ""

    if callable(log):
        log(f"TMDB RESPONSE status={status} url={url}")
        if log_responses:
            if payload is not None:
                try:
                    dumped = json.dumps(payload, ensure_ascii=False)
                except Exception:
                    dumped = str(payload)
                log(f"TMDB RESPONSE_BODY url={url} body={dumped[:2000]}")
            elif body_text:
                log(f"TMDB RESPONSE_BODY url={url} body={body_text[:2000]}")
    return status, payload, body_text


def fetch_tv_episode_credits(
    *,
    tmdb_id: int,
    season_number: int,
    episode_number: int,
    api_key: str,
    language: str = "de-DE",
    timeout: int = 15,
    log: Callable[[str], None] | None = None,
    log_responses: bool = False,
) -> List[TmdbCastMember]:
    """Fetch the cast for a specific episode (/tv/{id}/season/{n}/episode/{e}/credits)."""
    if requests is None:
        return []
    api_key = (api_key or "").strip()
    if not api_key or not tmdb_id:
        return []
    params = {"api_key": api_key, "language": (language or "de-DE").strip()}
    url = f"{TMDB_API_BASE}/tv/{tmdb_id}/season/{season_number}/episode/{episode_number}/credits?{urlencode(params)}"
    if callable(log):
        log(f"TMDB GET {url}")
    try:
        response = requests.get(url, timeout=timeout)
    except Exception as exc:  # pragma: no cover
        if callable(log):
            log(f"TMDB ERROR /tv/{{id}}/season/{{n}}/episode/{{e}}/credits request_failed error={exc!r}")
        return []
    status = getattr(response, "status_code", None)
    if callable(log):
        log(f"TMDB RESPONSE /tv/{{id}}/season/{{n}}/episode/{{e}}/credits status={status}")
    if status != 200:
        return []
    try:
        payload = response.json() or {}
    except Exception:
        return []
    if callable(log) and log_responses:
        try:
            dumped = json.dumps(payload, ensure_ascii=False)
        except Exception:
            dumped = str(payload)
        log(f"TMDB RESPONSE_BODY /tv/{{id}}/season/{{n}}/episode/{{e}}/credits body={dumped[:2000]}")

    cast_payload = payload.get("cast") or []
    if callable(log):
        log(f"TMDB CREDITS /tv/{{id}}/season/{{n}}/episode/{{e}}/credits cast={len(cast_payload)}")
    with_images: List[TmdbCastMember] = []
    without_images: List[TmdbCastMember] = []
    for entry in cast_payload:
        name = (entry.get("name") or "").strip()
        role = (entry.get("character") or "").strip()
        thumb = _image_url(entry.get("profile_path") or "", size="w185")
        if not name:
            continue
        member = TmdbCastMember(name=name, role=role, thumb=thumb)
        if thumb:
            with_images.append(member)
        else:
            without_images.append(member)
    if with_images:
        return with_images[:30]
    return without_images[:30]


def lookup_tv_show(
    *,
    title: str,
    api_key: str,
    language: str = "de-DE",
    timeout: int = 15,
    log: Callable[[str], None] | None = None,
    log_responses: bool = False,
    include_cast: bool = False,
) -> Optional[TmdbShowMeta]:
    """Search TMDB for a TV show and return plot + poster URL (if available)."""
    if requests is None:
        return None
    api_key = (api_key or "").strip()
    if not api_key:
        return None
    query = (title or "").strip()
    if not query:
        return None

    params = {
        "api_key": api_key,
        "language": (language or "de-DE").strip(),
        "query": query,
        "include_adult": "false",
        "page": "1",
    }
    url = f"{TMDB_API_BASE}/search/tv?{urlencode(params)}"
    status, payload, body_text = _tmdb_get_json(
        url=url,
        timeout=timeout,
        log=log,
        log_responses=log_responses,
    )
    results = (payload or {}).get("results") if isinstance(payload, dict) else []
    results = results or []
    if callable(log):
        log(f"TMDB RESPONSE /search/tv status={status} results={len(results)}")
        if log_responses and payload is None and body_text:
            log(f"TMDB RESPONSE_BODY /search/tv body={body_text[:2000]}")

    if status != 200:
        return None
    if not results:
        return None

    normalized_query = query.casefold()
    best = None
    for candidate in results:
        name = (candidate.get("name") or "").casefold()
        original_name = (candidate.get("original_name") or "").casefold()
        if name == normalized_query or original_name == normalized_query:
            best = candidate
            break
    if best is None:
        best = results[0]

    tmdb_id = int(best.get("id") or 0)
    plot = (best.get("overview") or "").strip()
    poster = _image_url(best.get("poster_path") or "", size="w342")
    fanart = _image_url(best.get("backdrop_path") or "", size="w780")
    try:
        rating = float(best.get("vote_average") or 0.0)
    except Exception:
        rating = 0.0
    try:
        votes = int(best.get("vote_count") or 0)
    except Exception:
        votes = 0
    if not tmdb_id:
        return None
    cast: List[TmdbCastMember] = []
    if include_cast and tmdb_id:
        detail_params = {
            "api_key": api_key,
            "language": (language or "de-DE").strip(),
            "append_to_response": "credits",
        }
        detail_url = f"{TMDB_API_BASE}/tv/{tmdb_id}?{urlencode(detail_params)}"
        d_status, d_payload, d_body = _tmdb_get_json(
            url=detail_url,
            timeout=timeout,
            log=log,
            log_responses=log_responses,
        )
        if callable(log):
            log(f"TMDB RESPONSE /tv/{{id}} status={d_status}")
            if log_responses and d_payload is None and d_body:
                log(f"TMDB RESPONSE_BODY /tv/{{id}} body={d_body[:2000]}")
        if d_status == 200 and isinstance(d_payload, dict):
            credits = d_payload.get("credits") or {}
            cast = _parse_cast_payload((credits or {}).get("cast"))
    if not plot and not poster and not fanart and not rating and not votes and not cast:
        return None
    return TmdbShowMeta(
        tmdb_id=tmdb_id,
        plot=plot,
        poster=poster,
        fanart=fanart,
        rating=rating,
        votes=votes,
        cast=cast,
    )


@dataclass(frozen=True)
class TmdbMovieMeta:
    tmdb_id: int
    plot: str
    poster: str
    fanart: str
    runtime_minutes: int
    rating: float
    votes: int
    cast: List[TmdbCastMember]


def _fetch_movie_details(
    *,
    tmdb_id: int,
    api_key: str,
    language: str,
    timeout: int,
    log: Callable[[str], None] | None,
    log_responses: bool,
    include_cast: bool,
) -> Tuple[int, List[TmdbCastMember]]:
    """Fetches /movie/{id} and (optionally) bundles credits via append_to_response=credits."""
    if requests is None or not tmdb_id:
        return 0, []
    api_key = (api_key or "").strip()
    if not api_key:
        return 0, []
    params: Dict[str, str] = {
        "api_key": api_key,
        "language": (language or "de-DE").strip(),
    }
    if include_cast:
        params["append_to_response"] = "credits"
    url = f"{TMDB_API_BASE}/movie/{tmdb_id}?{urlencode(params)}"
    status, payload, body_text = _tmdb_get_json(url=url, timeout=timeout, log=log, log_responses=log_responses)
    if callable(log):
        log(f"TMDB RESPONSE /movie/{{id}} status={status}")
        if log_responses and payload is None and body_text:
            log(f"TMDB RESPONSE_BODY /movie/{{id}} body={body_text[:2000]}")
    if status != 200 or not isinstance(payload, dict):
        return 0, []
    try:
        runtime = int(payload.get("runtime") or 0)
    except Exception:
        runtime = 0
    cast: List[TmdbCastMember] = []
    if include_cast:
        credits = payload.get("credits") or {}
        cast = _parse_cast_payload((credits or {}).get("cast"))
    return runtime, cast


def lookup_movie(
    *,
    title: str,
    api_key: str,
    language: str = "de-DE",
    timeout: int = 15,
    log: Callable[[str], None] | None = None,
    log_responses: bool = False,
    include_cast: bool = False,
) -> Optional[TmdbMovieMeta]:
    """Search TMDB for a movie and return plot + poster URL (if available)."""
    if requests is None:
        return None
    api_key = (api_key or "").strip()
    if not api_key:
        return None
    query = (title or "").strip()
    if not query:
        return None

    params = {
        "api_key": api_key,
        "language": (language or "de-DE").strip(),
        "query": query,
        "include_adult": "false",
        "page": "1",
    }
    url = f"{TMDB_API_BASE}/search/movie?{urlencode(params)}"
    status, payload, body_text = _tmdb_get_json(
        url=url,
        timeout=timeout,
        log=log,
        log_responses=log_responses,
    )
    results = (payload or {}).get("results") if isinstance(payload, dict) else []
    results = results or []
    if callable(log):
        log(f"TMDB RESPONSE /search/movie status={status} results={len(results)}")
        if log_responses and payload is None and body_text:
            log(f"TMDB RESPONSE_BODY /search/movie body={body_text[:2000]}")

    if status != 200:
        return None
    if not results:
        return None

    normalized_query = query.casefold()
    best = None
    for candidate in results:
        name = (candidate.get("title") or "").casefold()
        original_name = (candidate.get("original_title") or "").casefold()
        if name == normalized_query or original_name == normalized_query:
            best = candidate
            break
    if best is None:
        best = results[0]

    tmdb_id = int(best.get("id") or 0)
    plot = (best.get("overview") or "").strip()
    poster = _image_url(best.get("poster_path") or "", size="w342")
    fanart = _image_url(best.get("backdrop_path") or "", size="w780")
    runtime_minutes = 0
    try:
        rating = float(best.get("vote_average") or 0.0)
    except Exception:
        rating = 0.0
    try:
        votes = int(best.get("vote_count") or 0)
    except Exception:
        votes = 0
    if not tmdb_id:
        return None
    cast: List[TmdbCastMember] = []
    runtime_minutes, cast = _fetch_movie_details(
        tmdb_id=tmdb_id,
        api_key=api_key,
        language=language,
        timeout=timeout,
        log=log,
        log_responses=log_responses,
        include_cast=include_cast,
    )
    if not plot and not poster and not fanart and not rating and not votes and not cast:
        return None
    return TmdbMovieMeta(
        tmdb_id=tmdb_id,
        plot=plot,
        poster=poster,
        fanart=fanart,
        runtime_minutes=runtime_minutes,
        rating=rating,
        votes=votes,
        cast=cast,
    )


@dataclass(frozen=True)
class TmdbEpisodeMeta:
    plot: str
    thumb: str
    runtime_minutes: int


@dataclass(frozen=True)
class TmdbSeasonMeta:
    plot: str
    poster: str


def lookup_tv_season_summary(
    *,
    tmdb_id: int,
    season_number: int,
    api_key: str,
    language: str = "de-DE",
    timeout: int = 15,
    log: Callable[[str], None] | None = None,
    log_responses: bool = False,
) -> Optional[TmdbSeasonMeta]:
    """Fetch season metadata (plot + poster)."""
    if requests is None:
        return None

    api_key = (api_key or "").strip()
    if not api_key or not tmdb_id:
        return None

    params = {"api_key": api_key, "language": (language or "de-DE").strip()}
    url = f"{TMDB_API_BASE}/tv/{tmdb_id}/season/{season_number}?{urlencode(params)}"
    if callable(log):
        log(f"TMDB GET {url}")
    try:
        response = requests.get(url, timeout=timeout)
    except Exception:
        return None
    status = getattr(response, "status_code", None)
    if callable(log):
        log(f"TMDB RESPONSE /tv/{{id}}/season/{{n}} status={status}")
    if status != 200:
        return None
    try:
        payload = response.json() or {}
    except Exception:
        return None
    if callable(log) and log_responses:
        try:
            dumped = json.dumps(payload, ensure_ascii=False)
        except Exception:
            dumped = str(payload)
        log(f"TMDB RESPONSE_BODY /tv/{{id}}/season/{{n}} body={dumped[:2000]}")

    plot = (payload.get("overview") or "").strip()
    poster_path = (payload.get("poster_path") or "").strip()
    poster = f"{TMDB_IMAGE_BASE}/w342{poster_path}" if poster_path else ""
    if not plot and not poster:
        return None
    return TmdbSeasonMeta(plot=plot, poster=poster)


def lookup_tv_season(
    *,
    tmdb_id: int,
    season_number: int,
    api_key: str,
    language: str = "de-DE",
    timeout: int = 15,
    log: Callable[[str], None] | None = None,
    log_responses: bool = False,
) -> Optional[Dict[int, TmdbEpisodeMeta]]:
    """Fetch episode metadata for a season: episode_number -> (plot, thumb)."""
    if requests is None:
        return None
    api_key = (api_key or "").strip()
    if not api_key or not tmdb_id or season_number is None:
        return None
    params = {"api_key": api_key, "language": (language or "de-DE").strip()}
    url = f"{TMDB_API_BASE}/tv/{tmdb_id}/season/{season_number}?{urlencode(params)}"
    if callable(log):
        log(f"TMDB GET {url}")
    try:
        response = requests.get(url, timeout=timeout)
    except Exception as exc:  # pragma: no cover
        if callable(log):
            log(f"TMDB ERROR /tv/{{id}}/season/{{n}} request_failed error={exc!r}")
        return None

    status = getattr(response, "status_code", None)
    payload = None
    body_text = ""
    try:
        payload = response.json() or {}
    except Exception:
        try:
            body_text = (response.text or "").strip()
        except Exception:
            body_text = ""

    episodes = (payload or {}).get("episodes") or []
    if callable(log):
        log(f"TMDB RESPONSE /tv/{{id}}/season/{{n}} status={status} episodes={len(episodes)}")
        if log_responses:
            if payload is not None:
                try:
                    dumped = json.dumps(payload, ensure_ascii=False)
                except Exception:
                    dumped = str(payload)
                log(f"TMDB RESPONSE_BODY /tv/{{id}}/season/{{n}} body={dumped[:2000]}")
            elif body_text:
                log(f"TMDB RESPONSE_BODY /tv/{{id}}/season/{{n}} body={body_text[:2000]}")

    if status != 200 or not episodes:
        return None

    result: Dict[int, TmdbEpisodeMeta] = {}
    for entry in episodes:
        try:
            ep_number = int(entry.get("episode_number") or 0)
        except Exception:
            continue
        if not ep_number:
            continue
        plot = (entry.get("overview") or "").strip()
        runtime_minutes = 0
        try:
            runtime_minutes = int(entry.get("runtime") or 0)
        except Exception:
            runtime_minutes = 0
        still_path = (entry.get("still_path") or "").strip()
        thumb = f"{TMDB_IMAGE_BASE}/w300{still_path}" if still_path else ""
        if not plot and not thumb and not runtime_minutes:
            continue
        result[ep_number] = TmdbEpisodeMeta(plot=plot, thumb=thumb, runtime_minutes=runtime_minutes)
    return result or None
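Note that episodes without any plot, thumbnail, or runtime are skipped, so the returned mapping can be sparse. A dependency-free sketch of consuming such a result (using a hypothetical `EpisodeMeta` stand-in and made-up values, not real TMDB data):

```python
from dataclasses import dataclass
from typing import Dict


@dataclass(frozen=True)
class EpisodeMeta:  # stand-in for TmdbEpisodeMeta
    plot: str
    thumb: str
    runtime_minutes: int


# Hypothetical result in the shape lookup_tv_season returns:
# episode_number -> metadata, with gaps where no metadata existed.
season: Dict[int, EpisodeMeta] = {
    1: EpisodeMeta(plot="Pilot", thumb="", runtime_minutes=45),
    3: EpisodeMeta(plot="", thumb="/e3.jpg", runtime_minutes=0),
}

# Callers should look episodes up with .get() rather than assume contiguity.
assert season.get(2) is None
assert season[1].runtime_minutes == 45
assert season[3].thumb == "/e3.jpg"
```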