mirror of https://github.com/anonymousx97/social-dl.git
synced 2025-02-20 11:13:19 +08:00

RedditDL improved, download multiple media in async, and some bug fixes.
This commit is contained in:
parent 09262818a2
commit e355eab67f

README.md (52 lines changed)
@@ -1,10 +1,32 @@
-# Light weight Instagram DL bot.
+### Light weight Social Media downloader bot.
+
+* Supported Platforms:
+  * Videos: Instagram, Tiktok, Twitter, YouTube Shorts
+  * Images: Instagram, Reddit
+  * Gif : Reddit
+
-# Deploy:
+### Usage and Commands:
+
+* Send supported links in any authorised chat/channel, bot will try to download and send the media.
+* Owner only commands:
+  * `.dl link` to download and send media in any chat.
+  * `.bot update` to refresh chat list without restarting bot.
+  * `.bot restart` to restart bot.
+  * `.bot ids` to get chat / channel / user IDs.
+  * `.bot join or leave` to join / leave chat using ID.
+  * `.del` reply to a message to delete it.
+  * `.term` to run shell commands in bot. Example: `.term ls`
+  * These commands can be used anywhere and are not limited to authorised chats.
+
+### Deploy:
+
+* For android local deploy:
+  * Download Latest [Termux](https://github.com/termux/termux-app/releases).
+
+```bash
+# Update local packages after installing Termux.
+yes|apt update && yes|apt upgrade
+```
 * Config:
 * Get API_ID and API_HASH from https://my.telegram.org/auth .
-* Generate String Session by running this in termux:
+* Generate String Session by running this in Termux:
 ```bash
 bash -c "$(curl -fsSL https://raw.githubusercontent.com/ux-termux/string/main/Termux.sh)"
 ```
@@ -26,13 +48,6 @@
 * User : Your user id to control bot.
 * Trigger : Trigger to access bot.

-* Download Latest [Termux](https://github.com/termux/termux-app/releases).
-```bash
-# Update local packages after installing Termux.
-yes|apt update && yes|apt upgrade
-```
-
 * Run the following commands:
 ```bash
 # Install required packages.
@@ -57,25 +72,14 @@

 * If everything is correct you will get <b><i>Started</i></b> stdout in terminal and in your channel.

-# Usage and Commands:
-* Send Instagram link in any authorised chat/channel, bot will try to download and send the media.
-* Owner only commands:
-* `.dl link` to download and send media in any chat.
-* `.rdl` to download media from reddit.
-* `.bot update` to refresh chat list without restarting bot.
-* `.bot restart` to restart bot.
-* `.bot ids` to get chat / channel / user IDs.
-* `.bot join or leave` to join / leave chat using ID.
-* `.del` to delete message.
-* These commands can be used anywhere and are not limited to authorised chats.
-
-# Known limitations:
+### Known limitations:
 * If deployed on a VPS or any server, Instagram might block access to some content.
 After hitting Instagram's rate limit, image download might not work because servers and VPSes usually have static IPs, so Instagram would block access.
-* Deploying it locally would solve all of those issues since most of us have dynamic IP and Instagram will not be able to block access.
+* Deploying it locally would solve all of those issues because most of us are likely to have a dynamic IP, so Instagram will not be able to block access.
 Bot is made lightweight with local deploys in mind. But battery life will take some hit anyway.
 * Logging in with your Instagram account, which would solve the rate-limit issues, is not added and won't be added because 2 of my accounts were suspended till manual verification for using scraping bots like these with login.

-# Contact
+### Contact
 * For any questions related to deploy or issues, contact me on
 [Telegram](https://t.me/anonymousx97)
socialbot.py (349 lines changed)
@@ -1,5 +1,6 @@
 import asyncio
 import base64
+import glob
 import json
 import os
 import shutil
@@ -13,7 +14,7 @@ import yt_dlp
 from dotenv import load_dotenv
 from pyrogram import Client, filters, idle
 from pyrogram.enums import ChatType
-from pyrogram.errors import MediaEmpty, PhotoSaveFileInvalid, WebpageCurlFailed, PeerIdInvalid
+from pyrogram.errors import MediaEmpty, PeerIdInvalid, PhotoSaveFileInvalid, WebpageCurlFailed
 from pyrogram.handlers import MessageHandler
 from pyrogram.types import InputMediaPhoto, InputMediaVideo, Message
 from wget import download
@@ -21,14 +22,9 @@ from wget import download
 if os.path.isfile("config.env"):
     load_dotenv("config.env")

-bot = Client(
-    name="bot",
-    session_string=os.environ.get("STRING_SESSION"),
-    api_id=os.environ.get("API_ID"),
-    api_hash=os.environ.get("API_HASH"),
-)
+bot = Client(name="bot", session_string=os.environ.get("STRING_SESSION"), api_id=os.environ.get("API_ID"), api_hash=os.environ.get("API_HASH"))
 log_chat = os.environ.get("LOG")
-if log_chat == None:
+if log_chat is None:
     print("Enter log channel id in config")
     exit()
 chat_list = []
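The hunk above swaps `log_chat == None` for `log_chat is None`. As a quick illustration (not from the repo), equality can be hijacked by a class's `__eq__`, while an identity check cannot — which is why `is None` is the idiomatic guard for values like the result of `os.environ.get`:

```python
import os


class AlwaysEqual:
    """Hypothetical class whose __eq__ answers yes to everything."""

    def __eq__(self, other):
        return True


val = AlwaysEqual()
print(val == None)  # True: __eq__ hijacks the comparison
print(val is None)  # False: identity cannot be faked

# os.environ.get returns None for missing keys, so the identity guard works:
missing = os.environ.get("SURELY_UNSET_VARIABLE_12345")
print(missing is None)  # True
```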
@@ -94,16 +90,12 @@ async def multi_func(bot, message: Message):
         await bot.send_message(chat_id=log_chat, text=str(traceback.format_exc()))


-@bot.on_message(
-    filters.command(commands="term", prefixes=trigger) & filters.user(users)
-)
+@bot.on_message(filters.command(commands="term", prefixes=trigger) & filters.user(users))
 async def run_cmd(bot, message: Message):
     """Function to run shell commands"""
     cmd = message.text.replace("+term", "")
     status_ = await message.reply("executing...")
-    process = await asyncio.create_subprocess_shell(
-        cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
-    )
+    process = await asyncio.create_subprocess_shell(cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE)
     stdout, stderr = await process.communicate()

     if process.returncode is not None:
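The `.term` handler above spawns shell commands with `asyncio.create_subprocess_shell`, so the event loop is not blocked while the command runs. A minimal standalone sketch of the same pattern (the `run_shell` name is illustrative, not from the bot):

```python
import asyncio


async def run_shell(cmd: str) -> tuple[str, str, int]:
    # Spawn the shell command without blocking the event loop.
    process = await asyncio.create_subprocess_shell(
        cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
    )
    # communicate() waits for the process to exit and returns captured bytes.
    stdout, stderr = await process.communicate()
    return stdout.decode(), stderr.decode(), process.returncode


out, err, code = asyncio.run(run_shell("echo hello"))
print(out.strip(), code)  # hello 0
```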
@@ -126,15 +118,14 @@ async def delete_message(bot, message: Message):


 @bot.on_message(filters.command(commands="dl", prefixes=trigger) & filters.user(users))
 async def dl(bot, message: Message):
-    """ The main Logic Function to download media """
-    response = await bot.send_message(message.chat.id, "`trying to download...`")
+    """The main Logic Function to download media"""
     rw_message = message.text.split()
+    reply = message.reply_to_message
+    reply_id = reply.id if reply else None
+    sender_ = message.author_signature or message.from_user.first_name or ""
+    response = await bot.send_message(message.chat.id, "`trying to download...`")
     curse_ = ""
-    caption = "Shared by : "
-    if message.sender_chat:
-        caption += message.author_signature
-    else:
-        caption += message.from_user.first_name
+    caption = f"Shared by : {sender_}"
     check_dl = "failed"
     if "-d" in rw_message:
         doc = True
@@ -146,70 +137,36 @@ async def dl(bot, message: Message):
             curse_ = "#FuckInstagram"
             if check_dl == "failed":
                 check_dl = await json_dl(iurl=i, caption=caption, doc=doc)
-        if "twitter.com" in i or "https://youtube.com/shorts" in i or "tiktok.com" in i:
+        elif "twitter.com" in i or "https://youtube.com/shorts" in i or "tiktok.com" in i:
             check_dl = await iyt_dl(url=i)
+        elif "www.reddit.com" in i:
+            check_dl = await reddit_dl(url_=i, doc=doc, sender_=sender_)
+            curse_ = "Link doesn't contain any media or is restricted\nTip: Make sure you are sending original post url and not an embedded post."
+        else:
+            pass
         if isinstance(check_dl, dict):
-            """Send Media if response from check dl contains data dict"""
             if isinstance(check_dl["media"], list):
-                for data_ in check_dl["media"]:
-                    if isinstance(data_, list):
-                        """Send Grouped Media if data contains a list made of smaller lists of 5 medias"""
-                        await bot.send_media_group(
-                            message.chat.id,
-                            media=data_,
-                            reply_to_message_id=message.reply_to_message.id
-                            if message.reply_to_message
-                            else None,
-                        )
+                for vv in check_dl["media"]:
+                    if isinstance(vv, list):
+                        await bot.send_media_group(message.chat.id, media=vv, reply_to_message_id=reply_id)
                         await asyncio.sleep(3)
                     else:
-                        """Send Document if data is list of media files"""
-                        await bot.send_document(
-                            message.chat.id,
-                            document=data_,
-                            caption=caption,
-                            reply_to_message_id=message.reply_to_message.id
-                            if message.reply_to_message
-                            else None,
-                        )
-            """ If media isn't a list then it's a single file to be sent """
-            if isinstance(check_dl["media"], str):
+                        await bot.send_document(message.chat.id, document=vv, caption=check_dl["caption"] + caption, reply_to_message_id=reply_id, force_document=True)
+            else:
                 if doc:
-                    await bot.send_document(
-                        message.chat.id,
-                        document=check_dl["media"],
-                        caption=caption,
-                        reply_to_message_id=message.reply_to_message.id
-                        if message.reply_to_message
-                        else None,
-                    )
+                    await bot.send_document(message.chat.id, document=check_dl["media"], caption=check_dl["caption"] + caption, reply_to_message_id=reply_id, force_document=True)
                 else:
-                    if check_dl["type"] == "img":
-                        await bot.send_photo(
-                            message.chat.id,
-                            photo=check_dl["media"],
-                            caption=caption,
-                            reply_to_message_id=message.reply_to_message.id
-                            if message.reply_to_message
-                            else None,
-                        )
-                    if check_dl["type"] == "vid":
-                        try:
-                            await bot.send_video(
-                                message.chat.id,
-                                video=check_dl["media"],
-                                caption=caption,
-                                thumb=check_dl["thumb"]
-                                if os.path.isfile(check_dl["thumb"])
-                                else None,
-                                reply_to_message_id=message.reply_to_message.id
-                                if message.reply_to_message
-                                else None,
-                            )
-                        except (MediaEmpty, WebpageCurlFailed):
-                            pass
+                    try:
+                        if check_dl["type"] == "img":
+                            await bot.send_photo(message.chat.id, photo=check_dl["media"], caption=check_dl["caption"] + caption, reply_to_message_id=reply_id)
+                        elif check_dl["type"] == "vid":
+                            await bot.send_video(message.chat.id, video=check_dl["media"], caption=check_dl["caption"] + caption, thumb=check_dl["thumb"], reply_to_message_id=reply_id)
+                        else:
+                            await bot.send_animation(message.chat.id, animation=check_dl["media"], caption=check_dl["caption"] + caption, reply_to_message_id=reply_id, unsave=True)
+                    except PhotoSaveFileInvalid:
+                        await bot.send_document(message.chat.id, document=check_dl["media"], caption=check_dl["caption"] + caption, reply_to_message_id=reply_id)
+                    except (MediaEmpty, WebpageCurlFailed, ValueError):
+                        pass
             if os.path.exists(str(check_dl["path"])):
                 shutil.rmtree(str(check_dl["path"]))
             check_dl = "done"
@@ -222,33 +179,18 @@ async def dl(bot, message: Message):

 async def iyt_dl(url: str):
     """Stop handling post url because this only downloads Videos and post might contain images"""
-    if url.startswith("https://www.instagram.com/p/"):
+    if not url.startswith("https://www.instagram.com/reel/"):
         return "failed"
     path_ = time.time()
     video = f"{path_}/v.mp4"
     thumb = f"{path_}/i.png"
-    _opts = {
-        "outtmpl": video,
-        "ignoreerrors": True,
-        "ignore_no_formats_error": True,
-        "format": "bv[ext=mp4]+ba[ext=m4a]/b[ext=mp4]",
-        "quiet": True,
-        "logger": FakeLogger(),
-    }
+    _opts = {"outtmpl": video, "ignoreerrors": True, "ignore_no_formats_error": True, "format": "bv[ext=mp4]+ba[ext=m4a]/b[ext=mp4]", "quiet": True, "logger": FakeLogger()}
     return_val = "failed"
     try:
         yt_dlp.YoutubeDL(_opts).download(url)
         if os.path.isfile(video):
-            call(
-                f'''ffmpeg -hide_banner -loglevel error -ss 0.1 -i "{video}" -vframes 1 "{thumb}"''',
-                shell=True,
-            )
-            return_val = {
-                "path": str(path_),
-                "type": "vid",
-                "media": video,
-                "thumb": thumb,
-            }
+            call(f'''ffmpeg -hide_banner -loglevel error -ss 0.1 -i "{video}" -vframes 1 "{thumb}"''', shell=True)
+            return_val = {"path": str(path_), "type": "vid", "media": video, "thumb": thumb if os.path.isfile(thumb) else None, "caption": ""}
     except BaseException:
         pass
     return return_val
@@ -274,149 +216,85 @@ async def json_dl(iurl: str, doc: bool, caption: str):
         if url["__typename"] == "GraphVideo":
             url_ = url["video_url"]
             wget_x = download(url_, d_dir)
-            call(
-                f'''ffmpeg -hide_banner -loglevel error -ss 0.1 -i "{wget_x}" -vframes 1 "{d_dir}/i.png"''',
-                shell=True,
-            )
-            return_val = { "path": d_dir, "type": "vid", "media": wget_x, "thumb": d_dir + "/i.png" }
+            call(f'''ffmpeg -hide_banner -loglevel error -ss 0.1 -i "{wget_x}" -vframes 1 "{d_dir}/i.png"''', shell=True)
+            return_val = {"path": d_dir, "type": "vid", "media": wget_x, "thumb": d_dir + "/i.png", "caption": ""}

         if url["__typename"] == "GraphImage":
             url_ = url["display_url"]
             wget_x = download(url_, d_dir + "/i.jpg")
-            return_val = { "path": d_dir, "type": "img", "media": wget_x, "thumb": "" }
+            return_val = {"path": d_dir, "type": "img", "media": wget_x, "thumb": None, "caption": ""}

         if url["__typename"] == "GraphSidecar":
-            doc_list = []
-            vlist = []
-            vlist2 = []
-            plist = []
-            plist2 = []
+            url_list = []
             for i in url["edge_sidecar_to_children"]["edges"]:
                 if i["node"]["__typename"] == "GraphImage":
-                    url_ = i["node"]["display_url"]
-                    wget_x = download(url_, d_dir)
-                    if wget_x.endswith(".webp"):
-                        os.rename(wget_x, wget_x + ".jpg")
-                        wget_x = wget_x + ".jpg"
-                    if doc:
-                        doc_list.append(wget_x)
-                    else:
-                        if len(plist) >= 5:
-                            plist2.append(InputMediaPhoto(media=wget_x, caption=caption))
-                        else:
-                            plist.append(InputMediaPhoto(media=wget_x, caption=caption))
+                    url_list.append(i["node"]["display_url"])
                 if i["node"]["__typename"] == "GraphVideo":
-                    url_ = i["node"]["video_url"]
-                    wget_x = download(url_, d_dir)
-                    if doc:
-                        doc_list.append(wget_x)
-                    else:
-                        if len(vlist) >= 5:
-                            vlist2.append(InputMediaVideo(media=wget_x, caption=caption))
-                        else:
-                            vlist.append(InputMediaVideo(media=wget_x, caption=caption))
-            if doc:
-                return_val = {"path": d_dir, "media": doc_list}
-            else:
-                return_val = {
-                    "path": d_dir,
-                    "media": [
-                        zz for zz in [plist, plist2, vlist, vlist2] if len(zz) > 0
-                    ],
-                }
+                    url_list.append(i["node"]["video_url"])
+            downloads = await async_download(urls=url_list, path=d_dir, doc=doc, caption=caption + "\n..")
+            return_val = {"path": d_dir, "media": downloads}
     except Exception:
         await bot.send_message(chat_id=log_chat, text=str(traceback.format_exc()))
     return return_val
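The sidecar branch above now only collects URLs and defers the actual fetching to `async_download`. The underlying pattern of that helper is wrapping a blocking downloader in `asyncio.to_thread` and awaiting all calls at once with `asyncio.gather`. A self-contained sketch, with a dummy blocking fetch standing in for `wget.download`:

```python
import asyncio
import time


def blocking_fetch(url: str, path: str) -> str:
    # Stand-in for wget.download: pretend each file takes 0.2 s to fetch.
    time.sleep(0.2)
    return f"{path}/{url.rsplit('/', 1)[-1]}"


async def fetch_all(urls: list[str], path: str) -> list[str]:
    # to_thread moves each blocking call onto the default thread pool,
    # so gather can run them concurrently instead of one after another.
    return await asyncio.gather(*[asyncio.to_thread(blocking_fetch, u, path) for u in urls])


urls = [f"https://example.com/file{i}.jpg" for i in range(5)]
start = time.perf_counter()
files = asyncio.run(fetch_all(urls, "downloads"))
elapsed = time.perf_counter() - start
print(files[0])           # downloads/file0.jpg
print(elapsed < 1.0)      # True: five 0.2s fetches ran concurrently, not serially
```

`gather` also preserves input order in its result list, which is why the downloaded paths line up with the original URL list.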
-@bot.on_message(filters.command(commands="rdl", prefixes=trigger) & filters.user(users))
-async def reddit_dl(bot, message: Message):
-    ext = None
-    del_link = True
-    rw_message = message.text.split()
-    response = await bot.send_message(
-        chat_id=message.chat.id, text="Trying to download..."
-    )
-    if message.sender_chat:
-        sender_ = message.author_signature
-    else:
-        sender_ = message.from_user.first_name
-    for link_ in rw_message:
-        if link_.startswith("https://www.reddit.com"):
-            link = link_.split("/?")[0] + ".json?limit=1"
-            headers = {
-                "user-agent": "Mozilla/5.0 (Macintosh; PPC Mac OS X 10_8_7 rv:5.0; en-US) AppleWebKit/533.31.5 (KHTML, like Gecko) Version/4.0 Safari/533.31.5",
-            }
-            try:
-                async with aiohttp.ClientSession() as session:
-                    async with session.get(link, headers=headers) as ss:
-                        session_response = await ss.json()
-                json_ = session_response[0]["data"]["children"][0]["data"]
-                check_ = json_["secure_media"]
-                title_ = json_["title"]
-                subr = json_["subreddit_name_prefixed"]
-                caption = f"__{subr}:__\n**{title_}**\n\nShared by : {sender_}"
-                d_dir = str(time.time())
-                os.mkdir(d_dir)
-                if isinstance(check_, dict):
-                    v = f"{d_dir}/v.mp4"
-                    t = f"{d_dir}/i.png"
-                    if "oembed" in check_:
-                        vid_url = json_["preview"]["reddit_video_preview"]["fallback_url"]
-                        await bot.send_animation(
-                            chat_id=message.chat.id,
-                            animation=vid_url,
-                            unsave=True,
-                            caption=caption,
-                        )
-                    else:
-                        vid_url = check_["reddit_video"]["hls_url"]
-                        call(
-                            f'ffmpeg -hide_banner -loglevel error -i "{vid_url.strip()}" -c copy {v}',
-                            shell=True,
-                        )
-                        call(
-                            f'''ffmpeg -ss 0.1 -i "{v}" -vframes 1 "{t}"''', shell=True
-                        )
-                        await message.reply_video(v, caption=caption, thumb=t)
-                else:
-                    media_ = json_["url_overridden_by_dest"]
-                    try:
-                        if media_.strip().endswith(".gif"):
-                            ext = ".gif"
-                            await bot.send_animation(
-                                chat_id=message.chat.id,
-                                animation=media_,
-                                unsave=True,
-                                caption=caption,
-                            )
-                        if media_.strip().endswith((".jpg", ".jpeg", ".png", ".webp")):
-                            ext = ".png"
-                            await message.reply_photo(media_, caption=caption)
-                    except (MediaEmpty, WebpageCurlFailed):
-                        download(media_, f"{d_dir}/i{ext}")
-                        if ext == ".gif":
-                            await bot.send_animation(
-                                chat_id=message.chat.id,
-                                animation=f"{d_dir}/i.gif",
-                                unsave=True,
-                                caption=caption,
-                            )
-                        else:
-                            try:
-                                await message.reply_photo(f"{d_dir}/i.png", caption=caption)
-                            except PhotoSaveFileInvalid:
-                                await message.reply_document(document=f"{d_dir}/i.png", caption=caption)
-                if os.path.exists(str(d_dir)):
-                    shutil.rmtree(str(d_dir))
-            except Exception:
-                del_link = False
-                await bot.send_message(chat_id=log_chat, text=str(traceback.format_exc()))
-                await response.edit("Link doesn't contain any media or is restricted\nTip: Make sure you are sending original post url and not an embedded post.")
-                continue
-    if del_link:
-        await message.delete()
-        await response.delete()
+async def reddit_dl(url_, doc, sender_):
+    link = url_.split("/?")[0] + ".json?limit=1"
+    headers = {"user-agent": "Mozilla/5.0 (Macintosh; PPC Mac OS X 10_8_7 rv:5.0; en-US) AppleWebKit/533.31.5 (KHTML, like Gecko) Version/4.0 Safari/533.31.5"}
+    return_val = "failed"
+    try:
+        async with (aiohttp.ClientSession() as session, session.get(link, headers=headers) as ss):
+            response = await ss.json()
+        json_ = response[0]["data"]["children"][0]["data"]
+        caption = f'__{json_["subreddit_name_prefixed"]}:__\n**{json_["title"]}**\n\n'
+        d_dir = str(time.time())
+        os.mkdir(d_dir)
+        is_vid, is_gallery = json_.get("is_video"), json_.get("is_gallery")
+
+        if is_vid:
+            video = f"{d_dir}/v.mp4"
+            thumb = f"{d_dir}/i.png"
+            vid_url = json_["secure_media"]["reddit_video"]["hls_url"]
+            call(f'ffmpeg -hide_banner -loglevel error -i "{vid_url.strip()}" -c copy {video}', shell=True)
+            call(f'''ffmpeg -hide_banner -loglevel error -ss 0.1 -i "{video}" -vframes 1 "{thumb}"''', shell=True)
+            return_val = {"path": d_dir, "type": "vid", "media": video, "thumb": thumb, "caption": caption}
+
+        elif is_gallery:
+            grouped_media_urls = [f'https://i.redd.it/{i["media_id"]}.jpg' for i in json_["gallery_data"]["items"]]
+            downloads = await async_download(urls=grouped_media_urls, path=d_dir, doc=doc, caption=caption + f"Shared by : {sender_}")
+            return_val = {"path": d_dir, "media": downloads}
+
+        else:
+            media_ = json_["url_overridden_by_dest"].strip()
+            if media_.endswith((".jpg", ".jpeg", ".png", ".webp")):
+                img = download(media_, d_dir)
+                return_val = {"path": d_dir, "type": "img", "media": img, "thumb": None, "caption": caption}
+            elif media_.endswith(".gif"):
+                gif = download(media_, d_dir)
+                return_val = {"path": d_dir, "type": "animation", "media": gif, "thumb": None, "caption": caption}
+            else:
+                gif_url = json_.get("preview", {}).get("reddit_video_preview", {}).get("fallback_url")
+                if gif_url:
+                    gif = download(gif_url, d_dir)
+                    return_val = {"path": d_dir, "type": "animation", "media": gif, "thumb": None, "caption": caption}
+    except Exception:
+        await bot.send_message(chat_id=log_chat, text=str(traceback.format_exc()))
+    return return_val
+
+
+async def async_download(urls: list, path: str, doc: bool = False, caption: str = ""):
+    down_loads = await asyncio.gather(*[asyncio.to_thread(download, url, path) for url in urls])
+    if doc:
+        return down_loads
+    [os.rename(file, file + ".png") for file in glob.glob(f"{path}/*.webp")]
+    files = glob.glob(f"{path}/*")
+    grouped_images = [InputMediaPhoto(img, caption=caption) for img in files if img.endswith((".png", ".jpg", ".jpeg"))]
+    grouped_videos = [InputMediaVideo(vid, caption=caption) for vid in files if vid.endswith((".mp4", ".mkv", ".webm"))]
+    return_list = [grouped_images[imgs : imgs + 5] for imgs in range(0, len(grouped_images), 5)] + [
+        grouped_videos[vids : vids + 5] for vids in range(0, len(grouped_videos), 5)
+    ]
+    return return_list
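The new `async_download` returns media in sub-lists of at most five, because Telegram media groups cap the number of items that can be sent in one album. The slicing idiom on its own, with plain strings standing in for `InputMediaPhoto` objects:

```python
def chunk(items: list, size: int = 5) -> list[list]:
    # Slice the list into consecutive groups of at most `size` items.
    return [items[i : i + size] for i in range(0, len(items), size)]


photos = [f"photo{i}.jpg" for i in range(12)]
groups = chunk(photos)
print([len(g) for g in groups])  # [5, 5, 2]
```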
 class FakeLogger(object):

@@ -432,7 +310,7 @@ class FakeLogger(object):

 async def add_h():
     message_id = os.environ.get("MESSAGE")
-    if message_id == None:
+    if message_id is None:
         print("Enter Message id in config.\n")
         return 1
     try:
@@ -441,7 +319,7 @@ async def add_h():
         print("Log channel not found.\nCheck the variable for mistakes")
         return 1
     chat_list.clear()
-    if msg == None:
+    if msg is None:
         print("Message not found\nCheck variable for mistakes\n")
         return 1
     try:
@@ -452,27 +330,16 @@ async def add_h():
     chat_list.extend(chats_list)
     social_handler = bot.add_handler(
         MessageHandler(
             dl,
-            (
-                (
-                    filters.regex(r"^https://www.instagram.com/*")
-                    | filters.regex(r"^https://youtube.com/shorts/*")
-                    | filters.regex(r"^https://twitter.com/*")
-                    | filters.regex(r"^https://vm.tiktok.com/*")
-                )
-                & filters.chat(chat_list)
-            ),
+            ((
+                filters.regex(r"^https://www.instagram.com/*")
+                | filters.regex(r"^https://youtube.com/shorts/*")
+                | filters.regex(r"^https://twitter.com/*")
+                | filters.regex(r"^https://vm.tiktok.com/*")
+                | filters.regex(r"^https://www.reddit.com/*")
+            ) & filters.chat(chat_list)),
         ),
         group=1,
     )
-    reddit_handler = bot.add_handler(
-        MessageHandler(
-            reddit_dl,
-            (filters.regex(r"^https://www.reddit.com/*") & filters.chat(chat_list)),
-        ),
-        group=2,
-    )
-    handler_.extend([social_handler, reddit_handler])
+    handler_.append(social_handler)


 async def boot():
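The hunk above folds the Reddit pattern into the single filter group instead of registering a second handler, so one regex union now routes every supported link to `dl`. A stdlib sketch of how such a pattern union matches (same regexes as above, combined with `re` rather than pyrogram filters):

```python
import re

# The same URL patterns the handler registers, joined into one alternation.
patterns = [
    r"^https://www.instagram.com/*",
    r"^https://youtube.com/shorts/*",
    r"^https://twitter.com/*",
    r"^https://vm.tiktok.com/*",
    r"^https://www.reddit.com/*",
]
combined = re.compile("|".join(f"(?:{p})" for p in patterns))

print(bool(combined.match("https://www.reddit.com/r/pics/comments/abc/")))  # True
print(bool(combined.match("https://www.instagram.com/reel/xyz/")))          # True
print(bool(combined.match("https://example.com/watch")))                    # False
```

Note the trailing `/*` in these patterns means "zero or more slashes", so each one effectively acts as a prefix match on the domain.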