How to increase site speed

Our site’s speed is very low. We have disabled widgets and extra JavaScript, but the site is still very slow. We scanned our site, and the scanner told us to fix some Ember.js code, but we don’t want to modify the Discourse source code.
How can we speed up our site?

Time to First Byte

At 450ms for an anonymous user, this is not that great. The easiest way to improve it is to use a better server with a faster CPU.

I get half of that here on Meta.
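If you want to check that number yourself, here is a minimal sketch (Python with the requests library; the URL is a placeholder) that approximates time to first byte for an anonymous request. It lumps DNS, TCP, TLS, and server time together, so it won’t match the browser’s waterfall exactly:

```python
# Rough TTFB check for an anonymous (not logged-in) request. A sketch only:
# this measures DNS + TCP + TLS + server time together, so it approximates
# what a first-time visitor sees rather than isolating the server.
import time
import requests

URL = "https://forum.example.com/"  # placeholder, replace with your forum

start = time.monotonic()
with requests.get(URL, stream=True, timeout=30) as resp:
    ttfb = time.monotonic() - start   # headers received ~= first byte
    body = resp.content               # drain the rest of the body
    total = time.monotonic() - start

print(f"status     : {resp.status_code}")
print(f"TTFB       : {ttfb * 1000:.0f} ms")
print(f"full fetch : {total * 1000:.0f} ms for {len(body) / 1024:.0f} KB of HTML")
```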

Site Size

2.4MB for a website is crazy. Reduce that ASAP.

Meta sits at 816KB currently. That’s somewhat heavy, but we made it lighter last year and will make it lighter again this year.

Requests

Your site has double the request count of Meta (120 requests). HTTP/2 makes having many requests less costly and saved us from the inline-everything dark ages, but that isn’t an excuse to use 100+ requests to draw a web page.
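If you want a rough count outside the browser, here is a sketch that fetches the homepage, collects the assets referenced directly in the HTML, and totals their sizes. It is only an approximation: a real browser also loads fonts pulled in from CSS and requests fired by JavaScript, so the DevTools Network tab (or a HAR export) will show more requests and remains the source of truth. The URL is a placeholder:

```python
# Rough request-count / page-weight audit based only on assets referenced
# directly in the homepage HTML. Browser DevTools will report more, since
# it also sees assets loaded from CSS and by JavaScript.
from html.parser import HTMLParser
from urllib.parse import urljoin
import requests

URL = "https://forum.example.com/"  # placeholder, replace with your forum

class AssetCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.assets = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("script", "img") and attrs.get("src"):
            self.assets.add(urljoin(URL, attrs["src"]))
        elif tag == "link" and attrs.get("href"):
            self.assets.add(urljoin(URL, attrs["href"]))

page = requests.get(URL, timeout=30)
collector = AssetCollector()
collector.feed(page.text)

total_bytes = len(page.content)
for asset in collector.assets:
    try:
        total_bytes += len(requests.get(asset, timeout=30).content)
    except requests.RequestException:
        pass  # skip assets that fail to load (e.g. data: URIs, dead links)

print(f"requests (HTML-referenced) : {1 + len(collector.assets)}")
print(f"total size (uncompressed)  : {total_bytes / 1024:.0f} KB")
```

Note that `len(content)` is the decompressed size, so it will read higher than the transfer size DevTools reports for gzipped assets.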

Plugin stravaganza

Last time I checked, you had more than 10 third-party plugins. Well, as some guy said, “the best code is no code at all.”

If you really want all those cool plugins, you have to pay the price in speed.

12 Likes

How should we do this?

What is the best way to improve these numbers?

Where have you hosted your site?

Can you please tell us the specifications of your hosting account?

1 Like

Amazon Web Services / Micro.

Then the poor loading time is not related to hosting issues. Anyway, you should try to increase the RAM, as you have a significant number of users on your forum.

1 Like

Maybe not the only cause, but PageSpeed Insights has two “should fix” items.

If you have a custom script in <head>, maybe try moving it to just before </body> instead?

Maybe the page is using full-size images resized with CSS?

Or is it this plugin?
https://meta.discourse.org/t/topic-list-previews/41630

2 Likes

I’m sorry if I’m reading this wrong. Are you saying that you’re on the AWS t2.micro instance size? The instance size with 1GB of RAM and 1 vCPU? You’re probably hitting CPU constraints, if not RAM constraints due to your plugin load.

From the AWS docs:

T2 instances are Burstable Performance Instances that provide a baseline level of CPU performance with the ability to burst above the baseline. The baseline performance and ability to burst are governed by CPU Credits. Each T2 instance receives CPU Credits continuously at a set rate depending on the instance size.

With a Micro instance, you’re getting 6 CPU credits/hour, and that’s not great. You’d probably be better off migrating to the Digital Ocean equivalent tier, which is $10/mo, or even the DO $20/mo tier that offers double the RAM.
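If you want to confirm whether CPU credits are actually the bottleneck, here is a minimal sketch that pulls the CPUCreditBalance metric from CloudWatch (assuming boto3 is installed and AWS credentials are configured; the region and instance ID are placeholders):

```python
# Check whether a T2 instance is running out of CPU credits.
# Assumes boto3 is installed and AWS credentials are configured;
# the region and instance ID below are placeholders.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUCreditBalance",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=24),
    EndTime=datetime.utcnow(),
    Period=3600,  # one datapoint per hour
    Statistics=["Minimum"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Minimum"])
# A balance that keeps sliding toward zero means the instance is being
# throttled back to its baseline and is too small for the load.
```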

8 Likes

Thanks for explaining. I had already mentioned it to him, but I was busy at that moment.

@Alavi1412 should consider this and make the necessary changes.

1 Like

Sorry for the outdated information; it’s on a t2.small instance, which has 2GB of RAM and 12 CPU credits/hour.

I don’t think we need more CPU/RAM for the amount of traffic we have currently. Is there any way to measure this?

You could try safe mode and see if the network conditions improve at all; if they do, that would tell you your plugins are a big cause. Rafael’s points are still very valid: if you’re going to run tons of plugins, you need a beefy box to compensate for the increased load.
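If you want numbers rather than a feel, here is a rough sketch that times the homepage with and without Discourse’s safe_mode query parameter. The parameter values here are an assumption on my part and may differ between Discourse versions, so double-check against the /safe-mode page of your install:

```python
# Rough before/after comparison for safe mode. The safe_mode query parameter
# values are an assumption and may vary by Discourse version; the /safe-mode
# page in the browser is the canonical way to enter safe mode.
import statistics
import time
import requests

BASE = "https://forum.example.com/"  # placeholder, replace with your forum

def median_fetch_ms(url, runs=5):
    timings = []
    for _ in range(runs):
        start = time.monotonic()
        requests.get(url, timeout=30)
        timings.append((time.monotonic() - start) * 1000)
    return statistics.median(timings)

normal = median_fetch_ms(BASE)
safe = median_fetch_ms(BASE + "?safe_mode=no_plugins,no_themes")

print(f"normal    : {normal:.0f} ms")
print(f"safe mode : {safe:.0f} ms")
# A large gap suggests plugins/themes account for much of the slowdown.
```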

4 Likes

Regarding your hint about caching:

The warning is about three Google Analytics resources and two fonts.

  1. We have set up analytics via the admin settings panel; how can we add an expiry date to those resources?
  2. How can we add an expiry date to other resources? The answers I found via Google refer to an .htaccess file, which doesn’t exist in the Discourse folder. Do you have any idea how I should do that? (See the sketch below for checking what these resources currently send.)
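For what it’s worth, the Google Analytics script and the Google-hosted fonts are served from Google’s servers, so their cache headers are not something you can change from your end; that part of the warning usually has to be lived with. To see what each resource currently sends, here is a quick sketch (the URLs are examples; swap in the exact resources from the PageSpeed report):

```python
# Inspect the caching headers a resource currently sends. The URLs below are
# examples only; replace them with the resources PageSpeed complained about.
import requests

resources = [
    "https://www.google-analytics.com/analytics.js",
    "https://forum.example.com/stylesheets/desktop.css",  # placeholder path
]

for url in resources:
    resp = requests.get(url, stream=True, timeout=30)
    print(url)
    print("  Cache-Control:", resp.headers.get("Cache-Control", "(not set)"))
    print("  Expires      :", resp.headers.get("Expires", "(not set)"))
    resp.close()
```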

Thanks for the advice, but I can’t understand where this size comes from. Even when the plugins are deactivated, our site size is only slightly different from this number. How can one find out what makes up the size?

Thanks in advance

Well, Chrome DevTools (F12) gives you the size of every asset.

1 Like

Dear Padpors,

I have just tried to dig into the information. One of the reasons behind the large homepage size may be the huge number of photos on the homepage.

There is a whopping number of images on your website, both scaled and unscaled. That’s the reason for the large size.
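One way to spot the worst offenders is to compare each image’s intrinsic pixel size with the size it is displayed at. Here is a partial sketch that checks the width/height attributes in the homepage HTML (images resized only via CSS won’t be caught); it needs the requests and Pillow libraries, and the URL is a placeholder:

```python
# Spot images that are downloaded at full size but displayed smaller.
# Partial check only: it relies on width/height attributes in the HTML,
# so images scaled purely via CSS are not caught. Requires Pillow.
from html.parser import HTMLParser
from io import BytesIO
from urllib.parse import urljoin
import requests
from PIL import Image

URL = "https://forum.example.com/"  # placeholder, replace with your forum

class ImgCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.imgs = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if attrs.get("src"):
                self.imgs.append(
                    (urljoin(URL, attrs["src"]), attrs.get("width"), attrs.get("height"))
                )

collector = ImgCollector()
collector.feed(requests.get(URL, timeout=30).text)

for src, disp_w, disp_h in collector.imgs:
    try:
        data = requests.get(src, timeout=30).content
        real_w, real_h = Image.open(BytesIO(data)).size
    except Exception:
        continue  # skip data: URIs, SVGs, and broken links
    if disp_w and disp_w.isdigit() and int(disp_w) < real_w:
        print(f"{src}: served {real_w}x{real_h}, shown at {disp_w}x{disp_h},"
              f" {len(data) / 1024:.0f} KB")
```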

2 Likes

Try implementing lazy loading so that only a few images and topics are loaded at first. Right now your site loads 11 * 3 = 33 topics in one go; reduce that to about 6 topics with images on the homepage, which cuts down the images loaded initially. Doing so should shave roughly 1MB off the page size. Hope you can figure it out.

1 Like

Photos can be removed. I am concerned about:

1. the number of loaded JS files
2. their total volume: 3MB! That’s a lot.

1 Like

Yeah! @kemporu, I didn’t notice that. That is too much for JS files; it is enough to make old systems and old browsers go unresponsive.

But look at the time taken to load the images; it is way more than for those JS files.

Way more?! If my math is right, that’s like 23 minutes :scream:
If that is with the Topic List Previews plugin disabled, that leaves the avatars?

Is the site using a CDN?

1 Like

I guess not! If they had used a CDN, the loading time might have been much better.
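A quick way to check is to look at the response headers on an asset; most CDNs leave a fingerprint there. A rough sketch (the asset URL is a placeholder, and the header list is only a sample, since every provider uses different names):

```python
# Look for common CDN fingerprints in response headers. Examples only:
# CF-Ray is Cloudflare, X-Served-By is Fastly, X-Amz-Cf-Id is CloudFront.
import requests

ASSET = "https://forum.example.com/stylesheets/desktop.css"  # placeholder

resp = requests.get(ASSET, stream=True, timeout=30)
hints = ["server", "via", "x-cache", "cf-ray", "x-served-by", "x-amz-cf-id"]
found = [name for name in hints if name in resp.headers]

for name in found:
    print(f"{name}: {resp.headers[name]}")
if not found:
    print("No obvious CDN headers; assets are probably served directly.")
```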